- Creating infrastructure machine sets
- OKD infrastructure components
- Creating infrastructure machine sets for production environments
- Creating infrastructure machine sets for different clouds
- Sample YAML for a compute machine set custom resource on Alibaba Cloud
- Sample YAML for a compute machine set custom resource on AWS
- Sample YAML for a compute machine set custom resource on Azure
- Sample YAML for a compute machine set custom resource on Azure Stack Hub
- Sample YAML for a compute machine set custom resource on IBM Cloud
- Sample YAML for a compute machine set custom resource on GCP
- Sample YAML for a compute machine set custom resource on Nutanix
- Sample YAML for a compute machine set custom resource on OpenStack
- Sample YAML for a compute machine set custom resource on oVirt
- Sample YAML for a compute machine set custom resource on vSphere
- Creating a compute machine set
- Creating an infrastructure node
- Creating a machine config pool for infrastructure machines
- Assigning machine set resources to infrastructure nodes
- Moving resources to infrastructure machine sets
Creating infrastructure machine sets
This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.
You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
OKD infrastructure components
The following infrastructure workloads do not incur OKD worker subscriptions:
Kubernetes and OKD control plane services that run on masters
The default router
The integrated container image registry
The HAProxy-based Ingress Controller
The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects
Cluster aggregated logging
Service brokers
Red Hat Quay
Red Hat OpenShift Data Foundation
Red Hat Advanced Cluster Management for Kubernetes
Red Hat Advanced Cluster Security for Kubernetes
Red Hat OpenShift GitOps
Red Hat OpenShift Pipelines
Any node that runs any other container, pod, or component is a worker node that your subscription must cover.
For information about infrastructure nodes and which components can run on infrastructure nodes, see the “Red Hat OpenShift control plane and infrastructure nodes” section in the OpenShift sizing and subscription guide for enterprise Kubernetes document.
To create an infrastructure node, you can use a machine set, label the node, or use a machine config pool.
Creating infrastructure machine sets for production environments
In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
Creating infrastructure machine sets for different clouds
Use the sample compute machine set for your cloud.
Sample YAML for a compute machine set custom resource on Alibaba Cloud
This sample YAML defines a compute machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
name: <infrastructure_id>-<infra>-<zone> (3)
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> (3)
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> (3)
spec:
metadata:
labels:
node-role.kubernetes.io/infra: ""
providerSpec:
value:
apiVersion: machine.openshift.io/v1
credentialsSecret:
name: alibabacloud-credentials
imageId: <image_id> (4)
instanceType: <instance_type> (5)
kind: AlibabaCloudMachineProviderConfig
ramRoleName: <infrastructure_id>-role-worker (6)
regionId: <region> (7)
resourceGroup: (8)
id: <resource_group_id>
type: ID
securityGroups:
- tags: (9)
- Key: Name
Value: <infrastructure_id>-sg-<role>
type: Tags
systemDisk: (10)
category: cloud_essd
size: <disk_size>
tag: (9)
- Key: kubernetes.io/cluster/<infrastructure_id>
Value: owned
userDataSecret:
name: <user_data_secret> (11)
vSwitch:
tags: (9)
- Key: Name
Value: <infrastructure_id>-vswitch-<zone>
type: Tags
vpcId: ""
zoneId: <zone> (12)
taints: (13)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc ) installed, you can obtain the infrastructure ID by running the following command:
|
2 | Specify the <infra> node label. |
3 | Specify the infrastructure ID, <infra> node label, and zone. |
4 | Specify the image to use. Use an image from an existing default compute machine set for the cluster. |
5 | Specify the instance type you want to use for the compute machine set. |
6 | Specify the name of the RAM role to use for the compute machine set. Use the value that the installer populates in the default compute machine set. |
7 | Specify the region to place machines on. |
8 | Specify the resource group and type for the cluster. You can use the value that the installer populates in the default compute machine set, or specify a different one. |
9 | Specify the tags to use for the compute machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default compute machine set it creates, as needed. |
10 | Specify the type and size of the root disk. Use the category value that the installer populates in the default compute machine set it creates. If required, specify a different value in gigabytes for size . |
11 | Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set. |
12 | Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. |
13 | Specify a taint to prevent user workloads from being scheduled on infra nodes. |
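For reference, if you have the OpenShift CLI (oc) installed, you can typically obtain this infrastructure ID by querying the cluster infrastructure resource (a sketch; verify the output against your cluster):
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster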
Machine set parameters for Alibaba Cloud usage statistics
The default compute machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups, tag, and vSwitch parameters of the spec.template.spec.providerSpec.value list.
When creating compute machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the compute machine sets you create. You can also include additional tags as needed.
The following YAML snippets indicate which tags in the default compute machine sets are optional and which are required.
Tags in spec.template.spec.providerSpec.value.securityGroups
spec:
template:
spec:
providerSpec:
value:
securityGroups:
- tags:
- Key: kubernetes.io/cluster/<infrastructure_id> (1)
Value: owned
- Key: GISV
Value: ocp
- Key: sigs.k8s.io/cloud-provider-alibaba/origin (1)
Value: ocp
- Key: Name
Value: <infrastructure_id>-sg-<role> (2)
type: Tags
1 | Optional: This tag is applied even when not specified in the compute machine set. |
2 | Required, where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster and <role> is the node label to add. |
Tags in spec.template.spec.providerSpec.value.tag
spec:
template:
spec:
providerSpec:
value:
tag:
- Key: kubernetes.io/cluster/<infrastructure_id> (2)
Value: owned
- Key: GISV (1)
Value: ocp
- Key: sigs.k8s.io/cloud-provider-alibaba/origin (1)
Value: ocp
1 | Optional: This tag is applied even when not specified in the compute machine set. |
2 | Required, where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. |
Tags in spec.template.spec.providerSpec.value.vSwitch
spec:
template:
spec:
providerSpec:
value:
vSwitch:
tags:
- Key: kubernetes.io/cluster/<infrastructure_id> (1)
Value: owned
- Key: GISV (1)
Value: ocp
- Key: sigs.k8s.io/cloud-provider-alibaba/origin (1)
Value: ocp
- Key: Name
Value: <infrastructure_id>-vswitch-<zone> (2)
type: Tags
1 | Optional: This tag is applied even when not specified in the compute machine set. |
2 | Required, where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster and <zone> is the zone within your region to place machines on. |
Sample YAML for a compute machine set custom resource on AWS
This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
name: <infrastructure_id>-infra-<zone> (2)
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> (2)
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: infra (3)
machine.openshift.io/cluster-api-machine-type: infra (3)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> (2)
spec:
metadata:
labels:
node-role.kubernetes.io/infra: "" (3)
providerSpec:
value:
ami:
id: ami-046fe691f52a953f9 (4)
apiVersion: awsproviderconfig.openshift.io/v1beta1
blockDevices:
- ebs:
iops: 0
volumeSize: 120
volumeType: gp2
credentialsSecret:
name: aws-cloud-credentials
deviceIndex: 0
iamInstanceProfile:
id: <infrastructure_id>-worker-profile (1)
instanceType: m6i.large
kind: AWSMachineProviderConfig
placement:
availabilityZone: <zone> (6)
region: <region> (7)
securityGroups:
- filters:
- name: tag:Name
values:
- <infrastructure_id>-worker-sg (1)
subnet:
filters:
- name: tag:Name
values:
- <infrastructure_id>-private-<zone> (8)
tags:
- name: kubernetes.io/cluster/<infrastructure_id> (1)
value: owned
- name: <custom_tag_name> (5)
value: <custom_tag_value> (5)
userDataSecret:
name: worker-user-data
taints: (9)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: |
2 | Specify the infrastructure ID, infra role node label, and zone. |
3 | Specify the infra role node label. |
4 | Specify a valid Fedora CoreOS (FCOS) Amazon Machine Image (AMI) for your AWS zone for your OKD nodes. If you want to use an AWS Marketplace image, you must complete the OKD subscription from the AWS Marketplace to obtain an AMI ID for your region. |
5 | Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com. |
6 | Specify the zone, for example, us-east-1a. |
7 | Specify the region, for example, us-east-1. |
8 | Specify the infrastructure ID and zone. |
9 | Specify a taint to prevent user workloads from being scheduled on infra nodes. |
Machine sets running on AWS support non-guaranteed Spot Instances. You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file.
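For example, a minimal sketch of this setting (assuming the default Spot behavior with no maximum price) adds an empty spotMarketOptions stanza under providerSpec.value:
providerSpec:
  value:
    spotMarketOptions: {}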
Sample YAML for a compute machine set custom resource on Azure
This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
name: <infrastructure_id>-infra-<region> (3)
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
template:
metadata:
creationTimestamp: null
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
spec:
metadata:
creationTimestamp: null
labels:
machine.openshift.io/cluster-api-machineset: <machineset_name> (4)
node-role.kubernetes.io/infra: "" (2)
providerSpec:
value:
apiVersion: azureproviderconfig.openshift.io/v1beta1
credentialsSecret:
name: azure-cloud-credentials
namespace: openshift-machine-api
image: (5)
offer: ""
publisher: ""
resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> (6)
sku: ""
version: ""
internalLoadBalancer: ""
kind: AzureMachineProviderSpec
location: <region> (7)
managedIdentity: <infrastructure_id>-identity (1)
metadata:
creationTimestamp: null
natRule: null
networkResourceGroup: ""
osDisk:
diskSizeGB: 128
managedDisk:
storageAccountType: Premium_LRS
osType: Linux
publicIP: false
publicLoadBalancer: ""
resourceGroup: <infrastructure_id>-rg (1)
sshPrivateKey: ""
sshPublicKey: ""
tags:
- name: <custom_tag_name> (9)
value: <custom_tag_value> (9)
subnet: <infrastructure_id>-<role>-subnet (1) (2)
userDataSecret:
name: worker-user-data (2)
vmSize: Standard_D4s_v3
vnet: <infrastructure_id>-vnet (1)
zone: "1" (8)
taints: (10)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
You can obtain the subnet by running the following command:
You can obtain the vnet by running the following command:
|
2 | Specify the <infra> node label. |
3 | Specify the infrastructure ID, <infra> node label, and region. |
4 | Optional: Specify the compute machine set name to enable the use of availability sets. This setting only applies to new compute machines. |
5 | Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see “Selecting an Azure Marketplace image”. |
6 | Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. |
7 | Specify the region to place machines on. |
8 | Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. |
9 | Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. |
10 | Specify a taint to prevent user workloads from being scheduled on infra nodes. |
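For reference, assuming an existing worker compute machine set named <existing_machineset_name> (a placeholder), you can read the subnet and vnet values from its provider spec with commands similar to the following:
$ oc -n openshift-machine-api get machineset <existing_machineset_name> -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}'
$ oc -n openshift-machine-api get machineset <existing_machineset_name> -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}'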
Machine sets running on Azure support non-guaranteed Spot VMs. You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file.
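For example, a minimal sketch (assuming no maximum price is set) adds an empty spotVMOptions stanza under providerSpec.value:
providerSpec:
  value:
    spotVMOptions: {}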
Sample YAML for a compute machine set custom resource on Azure Stack Hub
This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
name: <infrastructure_id>-infra-<region> (3)
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
template:
metadata:
creationTimestamp: null
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
spec:
metadata:
creationTimestamp: null
labels:
node-role.kubernetes.io/infra: "" (2)
taints: (4)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
providerSpec:
value:
apiVersion: machine.openshift.io/v1beta1
availabilitySet: <availability_set> (6)
credentialsSecret:
name: azure-cloud-credentials
namespace: openshift-machine-api
image:
offer: ""
publisher: ""
resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> (1)
sku: ""
version: ""
internalLoadBalancer: ""
kind: AzureMachineProviderSpec
location: <region> (5)
managedIdentity: <infrastructure_id>-identity (1)
metadata:
creationTimestamp: null
natRule: null
networkResourceGroup: ""
osDisk:
diskSizeGB: 128
managedDisk:
storageAccountType: Premium_LRS
osType: Linux
publicIP: false
publicLoadBalancer: ""
resourceGroup: <infrastructure_id>-rg (1)
sshPrivateKey: ""
sshPublicKey: ""
subnet: <infrastructure_id>-<role>-subnet (1) (2)
userDataSecret:
name: worker-user-data (2)
vmSize: Standard_DS4_v2
vnet: <infrastructure_id>-vnet (1)
zone: "1" (7)
1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
You can obtain the subnet by running the following command:
You can obtain the vnet by running the following command:
|
2 | Specify the <infra> node label. |
3 | Specify the infrastructure ID, <infra> node label, and region. |
4 | Specify a taint to prevent user workloads from being scheduled on infra nodes. |
5 | Specify the region to place machines on. |
6 | Specify the availability set for the cluster. |
7 | Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. |
Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs.
Sample YAML for a compute machine set custom resource on IBM Cloud
This sample YAML defines a compute machine set that runs in a specified IBM Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
name: <infrastructure_id>-<infra>-<region> (3)
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> (3)
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> (3)
spec:
metadata:
labels:
node-role.kubernetes.io/infra: ""
providerSpec:
value:
apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1
credentialsSecret:
name: ibmcloud-credentials
image: <infrastructure_id>-rhcos (4)
kind: IBMCloudMachineProviderSpec
primaryNetworkInterface:
securityGroups:
- <infrastructure_id>-sg-cluster-wide
- <infrastructure_id>-sg-openshift-net
subnet: <infrastructure_id>-subnet-compute-<zone> (5)
profile: <instance_profile> (6)
region: <region> (7)
resourceGroup: <resource_group> (8)
userDataSecret:
name: <role>-user-data (2)
vpc: <vpc_name> (9)
zone: <zone> (10)
taints: (11)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
1 | The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
|
2 | The <infra> node label. |
3 | The infrastructure ID, <infra> node label, and region. |
4 | The custom Fedora CoreOS (FCOS) image that was used for cluster installation. |
5 | The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. |
6 | Specify the IBM Cloud instance profile. |
7 | Specify the region to place machines on. |
8 | The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. |
9 | The VPC name. |
10 | Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. |
11 | The taint to prevent user workloads from being scheduled on infra nodes. |
Sample YAML for a compute machine set custom resource on GCP
This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
name: <infrastructure_id>-w-a
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
template:
metadata:
creationTimestamp: null
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra>
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
spec:
metadata:
labels:
node-role.kubernetes.io/infra: ""
providerSpec:
value:
apiVersion: gcpprovider.openshift.io/v1beta1
canIPForward: false
credentialsSecret:
name: gcp-cloud-credentials
deletionProtection: false
disks:
- autoDelete: true
boot: true
image: <path_to_image> (3)
labels: null
sizeGb: 128
type: pd-ssd
gcpMetadata: (4)
- key: <custom_metadata_key>
value: <custom_metadata_value>
kind: GCPMachineProviderSpec
machineType: n1-standard-4
metadata:
creationTimestamp: null
networkInterfaces:
- network: <infrastructure_id>-network
subnetwork: <infrastructure_id>-worker-subnet
projectID: <project_name> (5)
region: us-central1
serviceAccounts:
- email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/cloud-platform
tags:
- <infrastructure_id>-worker
userDataSecret:
name: worker-user-data
zone: us-central1-a
taints: (6)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
1 | For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
|
2 | For <infra> , specify the <infra> node label. |
3 | Specify the path to the image that is used in current compute machine sets. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:
To use a GCP Marketplace image, specify the offer to use:
|
4 | Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata. |
5 | For <project_name> , specify the name of the GCP project that you use for your cluster. |
6 | Specify a taint to prevent user workloads from being scheduled on infra nodes. |
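For reference, assuming an existing worker compute machine set named <existing_machineset_name> (a placeholder), you can read the image path from its provider spec with a command similar to the following:
$ oc -n openshift-machine-api get machineset <existing_machineset_name> -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}'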
Machine sets running on GCP support non-guaranteed preemptible VM instances. You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file.
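For example, a minimal sketch adds the flag under providerSpec.value:
providerSpec:
  value:
    preemptible: true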
Sample YAML for a compute machine set custom resource on Nutanix
This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
name: <infrastructure_id>-<infra>-<zone> (3)
namespace: openshift-machine-api
annotations: (4)
machine.openshift.io/memoryMb: "16384"
machine.openshift.io/vCPU: "4"
spec:
replicas: 3
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> (3)
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> (3)
spec:
metadata:
labels:
node-role.kubernetes.io/infra: ""
providerSpec:
value:
apiVersion: machine.openshift.io/v1
cluster:
type: uuid
uuid: <cluster_uuid>
credentialsSecret:
name: nutanix-creds-secret
image:
name: <infrastructure_id>-rhcos (5)
type: name
kind: NutanixMachineProviderConfig
memorySize: 16Gi (6)
subnets:
- type: uuid
uuid: <subnet_uuid>
systemDiskSize: 120Gi (7)
userDataSecret:
name: <user_data_secret> (8)
vcpuSockets: 4 (9)
vcpusPerSocket: 1 (10)
taints: (11)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc ) installed, you can obtain the infrastructure ID by running the following command:
|
2 | Specify the <infra> node label. |
3 | Specify the infrastructure ID, <infra> node label, and zone. |
4 | Annotations for the cluster autoscaler. |
5 | Specify the image to use. Use an image from an existing default compute machine set for the cluster. |
6 | Specify the amount of memory for the machine in Gi. |
7 | Specify the size of the system disk in Gi. |
8 | Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set. |
9 | Specify the number of vCPU sockets. |
10 | Specify the number of vCPUs per socket. |
11 | Specify a taint to prevent user workloads from being scheduled on infra nodes. |
Sample YAML for a compute machine set custom resource on OpenStack
This sample YAML defines a compute machine set that runs on OpenStack and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
name: <infrastructure_id>-infra (3)
namespace: openshift-machine-api
spec:
replicas: <number_of_replicas>
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (3)
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (2)
machine.openshift.io/cluster-api-machine-type: <infra> (2)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (3)
spec:
metadata:
creationTimestamp: null
labels:
node-role.kubernetes.io/infra: ""
taints: (4)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
providerSpec:
value:
apiVersion: openstackproviderconfig.openshift.io/v1alpha1
cloudName: openstack
cloudsSecret:
name: openstack-cloud-credentials
namespace: openshift-machine-api
flavor: <nova_flavor>
image: <glance_image_name_or_location>
serverGroupID: <optional_UUID_of_server_group> (5)
kind: OpenstackProviderSpec
networks: (6)
- filter: {}
subnets:
- filter:
name: <subnet_name>
tags: openshiftClusterID=<infrastructure_id> (1)
primarySubnet: <rhosp_subnet_UUID> (7)
securityGroups:
- filter: {}
name: <infrastructure_id>-worker (1)
serverMetadata:
Name: <infrastructure_id>-worker (1)
openshiftClusterID: <infrastructure_id> (1)
tags:
- openshiftClusterID=<infrastructure_id> (1)
trunk: true
userDataSecret:
name: worker-user-data (2)
availabilityZone: <optional_openstack_availability_zone>
1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
|
2 | Specify the <infra> node label. |
3 | Specify the infrastructure ID and <infra> node label. |
4 | Specify a taint to prevent user workloads from being scheduled on infra nodes. |
5 | To set a server group policy for the MachineSet, enter the value that is returned from creating a server group. For most deployments, anti-affinity or soft-anti-affinity policies are recommended. |
6 | Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. |
7 | Specify the OpenStack subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. |
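As a reference sketch for the server group value in callout 5, you can create a server group with the OpenStack CLI and use the ID that the command returns; the group name my-server-group is only an example:
$ openstack server group create --policy anti-affinity my-server-group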
Sample YAML for a compute machine set custom resource on oVirt
This sample YAML defines a compute machine set that runs on oVirt and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <role> (2)
machine.openshift.io/cluster-api-machine-type: <role> (2)
name: <infrastructure_id>-<role> (3)
namespace: openshift-machine-api
spec:
replicas: <number_of_replicas> (4)
  selector: (5)
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <role> (2)
machine.openshift.io/cluster-api-machine-type: <role> (2)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
spec:
metadata:
labels:
node-role.kubernetes.io/<role>: "" (2)
providerSpec:
value:
apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
cluster_id: <ovirt_cluster_id> (6)
template_name: <ovirt_template_name> (7)
sparse: <boolean_value> (8)
format: <raw_or_cow> (9)
cpu: (10)
sockets: <number_of_sockets> (11)
cores: <number_of_cores> (12)
threads: <number_of_threads> (13)
memory_mb: <memory_size> (14)
guaranteed_memory_mb: <memory_size> (15)
os_disk: (16)
size_gb: <disk_size> (17)
storage_domain_id: <storage_domain_UUID> (18)
network_interfaces: (19)
vnic_profile_id: <vnic_profile_id> (20)
credentialsSecret:
name: ovirt-credentials (21)
kind: OvirtMachineProviderSpec
type: <workload_type> (22)
auto_pinning_policy: <auto_pinning_policy> (23)
hugepages: <hugepages> (24)
affinityGroupsNames:
- compute (25)
userDataSecret:
name: worker-user-data
1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command: |
2 | Specify the node label to add. |
3 | Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters. |
4 | Specify the number of machines to create. |
5 | Selector for the machines. |
6 | Specify the UUID for the oVirt cluster to which this VM instance belongs. |
7 | Specify the oVirt VM template to use to create the machine. |
8 | Setting this option to false enables preallocation of disks. The default is true. Setting sparse to true with format set to raw is not available for block storage domains. The raw format writes the entire virtual disk to the underlying physical disk. |
9 | Can be set to cow or raw. The default is cow. The cow format is optimized for virtual machines. |
10 | Optional: The CPU field contains the CPU configuration, including sockets, cores, and threads. |
11 | Optional: Specify the number of sockets for a VM. |
12 | Optional: Specify the number of cores per socket. |
13 | Optional: Specify the number of threads per core. |
14 | Optional: Specify the size of a VM's memory in MiB. |
15 | Optional: Specify the size of a virtual machine's guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see Memory Ballooning and Optimization Settings Explained. |
16 | Optional: Root disk of the node. |
17 | Optional: Specify the size of the bootable disk in GiB. |
18 | Optional: Specify the UUID of the storage domain for the compute node's disks. If none is provided, the compute node is created on the same storage domain as the control nodes (default). |
19 | Optional: List of the network interfaces of the VM. If you include this parameter, OKD discards all network interfaces from the template and creates new ones. |
20 | Optional: Specify the vNIC profile ID. |
21 | Specify the name of the secret object that holds the oVirt credentials. |
22 | Optional: Specify the workload type for which the instance is optimized. This value affects the oVirt VM parameter. Supported values: desktop, server (default), high_performance. high_performance improves performance on the VM. Limitations exist, for example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide. |
23 | Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none, resize_and_pin. For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide. |
24 | Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576. For more information, see Configuring Huge Pages in the Virtual Machine Management Guide. |
25 | Optional: A list of affinity group names to be applied to the VMs. The affinity groups must exist in oVirt. |
Because oVirt uses a template when creating a VM, if you do not specify a value for an optional parameter, oVirt uses the value for that parameter that is specified in the template.
Sample YAML for a compute machine set custom resource on vSphere
This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
creationTimestamp: null
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
name: <infrastructure_id>-infra (2)
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (2)
template:
metadata:
creationTimestamp: null
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
machine.openshift.io/cluster-api-machine-role: <infra> (3)
machine.openshift.io/cluster-api-machine-type: <infra> (3)
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (2)
spec:
metadata:
creationTimestamp: null
labels:
node-role.kubernetes.io/infra: "" (3)
taints: (4)
- key: node-role.kubernetes.io/infra
effect: NoSchedule
providerSpec:
value:
apiVersion: vsphereprovider.openshift.io/v1beta1
credentialsSecret:
name: vsphere-cloud-credentials
diskGiB: 120
kind: VSphereMachineProviderSpec
memoryMiB: 8192
metadata:
creationTimestamp: null
network:
devices:
- networkName: "<vm_network_name>" (5)
numCPUs: 4
numCoresPerSocket: 1
snapshot: ""
template: <vm_template_name> (6)
userDataSecret:
name: worker-user-data
workspace:
datacenter: <vcenter_datacenter_name> (7)
datastore: <vcenter_datastore_name> (8)
folder: <vcenter_vm_folder_path> (9)
resourcepool: <vsphere_resource_pool> (10)
server: <vcenter_server_ip> (11)
1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc ) installed, you can obtain the infrastructure ID by running the following command:
|
2 | Specify the infrastructure ID and <infra> node label. |
3 | Specify the <infra> node label. |
4 | Specify a taint to prevent user workloads from being scheduled on infra nodes. |
5 | Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. |
6 | Specify the vSphere VM template to use, such as user-5ddjd-rhcos . |
7 | Specify the vCenter Datacenter to deploy the compute machine set on. |
8 | Specify the vCenter Datastore to deploy the compute machine set on. |
9 | Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . |
10 | Specify the vSphere resource pool for your VMs. |
11 | Specify the vCenter server IP or fully qualified domain name. |
Creating a compute machine set
In addition to the ones created by the installation program, you can create your own compute machine sets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
Deploy an OKD cluster.
Install the OpenShift CLI (oc).
Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
Check values of a specific compute machine set:
$ oc get machineset <machineset_name> -n \
openshift-machine-api -o yaml
Example output
...
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: agl030519-vplxk (1)
machine.openshift.io/cluster-api-machine-role: worker (2)
machine.openshift.io/cluster-api-machine-type: worker
machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
1 The cluster ID. 2 A default node label.
Create the new MachineSet CR:
$ oc create -f <file_name>.yaml
View the list of compute machine sets:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
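As an optional check (a sketch; machines created by the new set are prefixed with its name), you can also list the machines and confirm that they reach the Running phase:
$ oc get machines -n openshift-machine-api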
Creating an infrastructure node
See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API.
Requirements of the cluster dictate that infrastructure, also called infra, nodes be provisioned. The installation program provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app, nodes through labeling.
Procedure
Add a label to the worker node that you want to act as an application node:
$ oc label node <node-name> node-role.kubernetes.io/app=""
Add a label to the worker nodes that you want to act as infrastructure nodes:
$ oc label node <node-name> node-role.kubernetes.io/infra=""
Check to see if applicable nodes now have the infra and app roles:
$ oc get nodes
Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod’s selector.
If the default node selector key conflicts with the key of a pod’s label, then the default node selector is not applied.
However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="", when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="", can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.
Edit the Scheduler object:
$ oc edit scheduler cluster
Add the defaultNodeSelector field with the appropriate node selector:
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
name: cluster
...
spec:
defaultNodeSelector: topology.kubernetes.io/region=us-east-1 (1)
...
1 This example node selector deploys pods on nodes in the us-east-1 region by default.
Save the file to apply the changes.
You can now move infrastructure resources to the newly labeled infra nodes.
Creating a machine config pool for infrastructure machines
If you need infrastructure machines to have dedicated configurations, you must create an infra pool.
Procedure
Add a label to the node that you want to assign as the infra node:
$ oc label node <node_name> <label>
For example:
$ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=
Create a machine config pool that contains both the worker role and your custom role as machine config selector:
$ cat infra.mcp.yaml
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
name: infra
spec:
machineConfigSelector:
matchExpressions:
- {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} (1)
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: "" (2)
1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector.
Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.
After you have the YAML file, you can create the machine config pool:
$ oc create -f infra.mcp.yaml
Check the machine configs to ensure that the infrastructure configuration rendered successfully:
$ oc get machineconfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED
00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
99-master-ssh 3.2.0 31d
99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
99-worker-ssh 3.2.0 31d
rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m
rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d
rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d
rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d
rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h
rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h
rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d
rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d
rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d
You should see a new machine config, with the rendered-infra-* prefix.
Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra. Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.
After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.
Create a machine config:
$ cat infra.mc.yaml
Example output
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 51-infra
labels:
machineconfiguration.openshift.io/role: infra (1)
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- path: /etc/infratest
mode: 0644
contents:
source: data:,infra
1 Add the custom role label that the machineConfigSelector of your infra machine config pool matches.
Apply the machine config to the infra-labeled nodes:
$ oc create -f infra.mc.yaml
Confirm that your new machine config pool is available:
$ oc get mcp
Example output
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s
master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m
worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m
In this example, a worker node was changed to an infra node.
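As a quick check, you can list the nodes that carry the infra role by label, for example:
$ oc get nodes -l node-role.kubernetes.io/infra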
Additional resources
- See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool.
Assigning machine set resources to infrastructure nodes
After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.
However, because an infra node is also assigned the worker role, user workloads can inadvertently be assigned to an infra node. To avoid this, you can apply a taint to the infra node and tolerations to the pods that you want to control.
Binding infrastructure node workloads using taints and tolerations
If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.
It is recommended that you preserve the dual infra and worker roles on these nodes and use taints and tolerations to control where user workloads are scheduled.
Prerequisites
- Configure additional MachineSet objects in your OKD cluster.
Procedure
Add a taint to the infra node to prevent scheduling user workloads on it:
Determine if the node has the taint:
$ oc describe nodes <node_name>
Sample output
oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
Roles: worker
...
Taints: node-role.kubernetes.io/infra:NoSchedule
...
This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.
If you have not configured a taint to prevent scheduling user workloads on it:
$ oc adm taint nodes <node_name> <key>:<effect>
For example:
$ oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule
You can alternatively apply the following YAML to add the taint:
kind: Node
apiVersion: v1
metadata:
name: <node_name>
labels:
…
spec:
taints:
- key: node-role.kubernetes.io/infra
effect: NoSchedule
…
This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule. Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.
If a descheduler is used, pods violating node taints could be evicted from the cluster.
Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification:
tolerations:
- effect: NoSchedule (1)
key: node-role.kubernetes.io/infra (2)
operator: Exists (3)
1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Exists Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node.
This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node.
Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.
Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details.
Additional resources
See Controlling pod placement using the scheduler for general information on scheduling a pod to a node.
See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes.
Moving resources to infrastructure machine sets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown:
spec:
nodePlacement: (1)
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
1 | Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. |
Applying a specific node selector to all infrastructure components causes OKD to schedule those workloads on nodes with that label.
Moving the router
You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node.
Prerequisites
- Configure additional compute machine sets in your OKD cluster.
Procedure
View the IngressController custom resource for the router Operator:
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
The command output resembles the following text:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
creationTimestamp: 2019-04-18T12:35:39Z
finalizers:
- ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
generation: 1
name: default
namespace: openshift-ingress-operator
resourceVersion: "11341"
selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
uid: 79509e05-61d6-11e9-bc55-02ce4781844a
spec: {}
status:
availableReplicas: 2
conditions:
- lastTransitionTime: 2019-04-18T12:36:15Z
status: "True"
type: Available
domain: apps.<cluster>.example.com
endpointPublishingStrategy:
type: LoadBalancerService
selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
Edit the ingresscontroller resource and change the nodeSelector to use the infra label:
$ oc edit ingresscontroller default -n openshift-ingress-operator
spec:
nodePlacement:
nodeSelector: (1)
matchLabels:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Confirm that the router pod is running on the infra node.
View the list of router pods and note the node name of the running pod:
$ oc get pod -n openshift-ingress -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none>
router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>
In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.
View the node status of the running pod:
$ oc get node <node_name> (1)
1 Specify the <node_name> that you obtained from the pod list.
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.25.0
Because the role list includes infra, the pod is running on the correct node.
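If you prefer a non-interactive change over oc edit, a merge patch should produce an equivalent spec. This is a sketch; verify the resulting IngressController afterward with oc get ingresscontroller default -n openshift-ingress-operator -o yaml:
$ oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
    -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/infra","value":"reserved"},{"effect":"NoExecute","key":"node-role.kubernetes.io/infra","value":"reserved"}]}}}'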
Moving the default registry
You can configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional compute machine sets in your OKD cluster.
Procedure
View the config/instance object:
$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
Example output
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  creationTimestamp: 2019-02-05T13:52:05Z
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 1
  name: cluster
  resourceVersion: "56174"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 36fd3724-294d-11e9-a524-12ffeee2931b
spec:
  httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
  logging: 2
  managementState: Managed
  proxy: {}
  replicas: 1
  requests:
    read: {}
    write: {}
  storage:
    s3:
      bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
      region: us-east-1
status:
...
Edit the config/instance object:
$ oc edit configs.imageregistry.operator.openshift.io/cluster
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          namespaces:
          - openshift-image-registry
          topologyKey: kubernetes.io/hostname
        weight: 100
  logLevel: Normal
  managementState: Managed
  nodeSelector: (1)
    node-role.kubernetes.io/infra: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
    value: reserved
  - effect: NoExecute
    key: node-role.kubernetes.io/infra
    value: reserved
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Verify the registry pod has been moved to the infrastructure node.
Run the following command to identify the node where the registry pod is located:
$ oc get pods -o wide -n openshift-image-registry
Confirm the node has the label you specified:
$ oc describe node <node_name>
Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
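As with the Ingress Controller, you can apply the same change non-interactively with a merge patch instead of oc edit. This is a sketch; confirm the resulting spec afterward:
$ oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge \
    -p '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""},"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/infra","value":"reserved"},{"effect":"NoExecute","key":"node-role.kubernetes.io/infra","value":"reserved"}]}}'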
Moving the monitoring solution
The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.
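The following procedure assumes that the cluster-monitoring-config config map already exists in the openshift-monitoring namespace. If it does not, you can create a stub first and then add the component settings shown below. A minimal sketch:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
Save the manifest to a file, for example cluster-monitoring-config.yaml (any file name works), and apply it:
$ oc apply -f cluster-monitoring-config.yaml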
Procedure
Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label:
$ oc edit configmap cluster-monitoring-config -n openshift-monitoring
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |+
    alertmanagerMain:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    openshiftStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    thanosQuerier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Watch the monitoring pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'
If a component has not moved to the infra node, delete the pod with this component:
$ oc delete pod -n openshift-monitoring <pod>
The component from the deleted pod is re-created on the infra node.
Moving OpenShift Logging resources
You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
Prerequisites
- The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
...
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
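If you prefer not to open an editor, a targeted merge patch can add the same node selector for a single component. This sketch only covers the Kibana visualization; extend it, or use oc edit as shown above, for the log store and any other components:
$ oc patch ClusterLogging instance -n openshift-logging --type=merge \
    -p '{"spec":{"visualization":{"kibana":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}}}'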
Verification
To verify that a component has moved, you can use the oc get pod -o wide command.
For example:
You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:
$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>
You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.25.0
ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.25.0
ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.25.0
ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.25.0
ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.25.0
ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.25.0
ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.25.0
Note that the node has a node-role.kubernetes.io/infra: '' label:
$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml
Example output
kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-139-48.us-east-2.compute.internal
  selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
  uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
  resourceVersion: '39083'
  creationTimestamp: '2020-04-13T19:07:55Z'
  labels:
    node-role.kubernetes.io/infra: ''
...
To move the Kibana pod, edit the ClusterLogging CR to add a node selector:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
...
spec:
...
  visualization:
    kibana:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
1 Add a node selector to match the label in the node specification.
After you save the CR, the current Kibana pod is terminated and a new pod is deployed:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m
fluentd-42dzz 1/1 Running 0 28m
fluentd-d74rq 1/1 Running 0 28m
fluentd-m5vr9 1/1 Running 0 28m
fluentd-nkxl7 1/1 Running 0 28m
fluentd-pdvqb 1/1 Running 0 28m
fluentd-tflh6 1/1 Running 0 28m
kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s
kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s
The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:
$ oc get pod kibana-7d85dcffc8-bfpfp -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>
After a few moments, the original Kibana pod is removed.
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m
fluentd-42dzz 1/1 Running 0 29m
fluentd-d74rq 1/1 Running 0 29m
fluentd-m5vr9 1/1 Running 0 29m
fluentd-nkxl7 1/1 Running 0 29m
fluentd-pdvqb 1/1 Running 0 29m
fluentd-tflh6 1/1 Running 0 29m
kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s
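To check where the Elasticsearch pods are running, you can list the pods in the openshift-logging project together with their node assignments and review the NODE column:
$ oc get pods -n openshift-logging -o wide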
Additional resources
- See the monitoring documentation for the general instructions on moving OKD components.