- Adding Operators to a cluster
- Prerequisites
- About Operator installation with OperatorHub
- Installing from OperatorHub using the web console
- Installing from OperatorHub using the CLI
- Installing a specific version of an Operator
- Installing a specific version of an Operator in the web console
- Preparing for multiple instances of an Operator for multitenant clusters
- Installing global Operators in custom namespaces
- Pod placement of Operator workloads
- Controlling where an Operator is installed
Adding Operators to a cluster
Using Operator Lifecycle Manager (OLM), cluster administrators can install OLM-based Operators to an OKD cluster.
For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.
Prerequisites
Ensure that you have downloaded the pull secret from the Red Hat OpenShift Cluster Manager as shown in Obtaining the installation program in the installation documentation for your platform.
If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.
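For reference, a minimal sketch of what that OperatorHub CR change can look like when enabling the redhat-operators source; adapt it to your cluster, and treat the linked procedure as authoritative:
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  sources:
  - name: redhat-operators
    disabled: false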
About Operator installation with OperatorHub
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a cluster administrator, you can install an Operator from OperatorHub by using the OKD web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.
During installation, you must determine the following initial settings for the Operator:
Installation Mode
Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.
Update Channel
If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
Approval Strategy
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
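If you select manual updates, you can approve a pending update request from the web console or the CLI. For example, a minimal CLI sketch, with placeholder names for the namespace and install plan:
$ oc get installplan -n <namespace>
$ oc patch installplan <install_plan_name> -n <namespace> --type merge --patch '{"spec":{"approved":true}}'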
Additional resources
Installing from OperatorHub using the web console
You can install and subscribe to an Operator from OperatorHub by using the OKD web console.
Prerequisites
- Access to an OKD cluster using an account with
cluster-admin
permissions.
Procedure
Navigate in the web console to the Operators → OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.
You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
Read the information about the Operator and click Install.
On the Install Operator page:
Select one of the following:
All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
If the cluster is in AWS STS mode, enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field.
To create the role’s ARN, follow the procedure described in Preparing AWS account.
If more than one update channel is available, select an Update channel.
Select Automatic or Manual approval strategy, as described earlier.
If the web console shows that the cluster is in “STS mode”, you must set Update approval to Manual.
Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
Click Install to make the Operator available to the selected namespaces on this OKD cluster.
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.
For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.
If it does not:
- Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
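You can also verify the same installation from the CLI; a quick sketch, assuming the default openshift-operators namespace:
$ oc get csv -n openshift-operators
$ oc get pods -n openshift-operators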
Installing from OperatorHub using the CLI
Instead of using the OKD web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
Access to an OKD cluster using an account with cluster-admin permissions.
You have installed the OpenShift CLI (oc).
Procedure
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
Example output
NAME CATALOG AGE
3scale-operator Red Hat Operators 91m
advanced-cluster-management Red Hat Operators 91m
amq7-cert-manager Red Hat Operators 91m
...
couchbase-enterprise-certified Certified Operators 91m
crunchy-postgres-operator Certified Operators 91m
mongodb-enterprise Certified Operators 91m
...
etcd Community Operators 91m
jaeger Community Operators 91m
kubefed Community Operators 91m
...
Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
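For example, a hedged sketch that prints only the channel names and the default channel, assuming the packages.operators.coreos.com/v1 PackageManifest schema:
$ oc get packagemanifests <operator_name> -n openshift-marketplace -o jsonpath='{range .status.channels[*]}{.name}{"\n"}{end}'
$ oc get packagemanifests <operator_name> -n openshift-marketplace -o jsonpath='{.status.defaultChannel}{"\n"}'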
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.
However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.
The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: <operatorgroup_name>
namespace: <namespace>
spec:
targetNamespaces:
- <namespace>
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:
Example Subscription object
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: <subscription_name>
namespace: openshift-operators (1)
spec:
channel: <channel_name> (2)
name: <operator_name> (3)
source: redhat-operators (4)
sourceNamespace: openshift-marketplace (5)
config:
env: (6)
- name: ARGS
value: "-v=10"
envFrom: (7)
- secretRef:
name: license-secret
volumes: (8)
- name: <volume_name>
configMap:
name: <configmap_name>
volumeMounts: (9)
- mountPath: <directory_name>
name: <volume_name>
tolerations: (10)
- operator: "Exists"
resources: (11)
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
nodeSelector: (12)
foo: bar
1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
2 Name of the channel to subscribe to.
3 Name of the Operator to subscribe to.
4 Name of the catalog source that provides the Operator.
5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
7 The envFrom parameter defines a list of sources to populate environment variables in the container.
8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10 The tolerations parameter defines a list of tolerations for the pod created by OLM.
11 The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
If the cluster is in STS mode, include the following fields in the Subscription object:
kind: Subscription
# ...
spec:
installPlanApproval: Manual (1)
config:
env:
- name: ROLEARN
value: "<role_arn>" (2)
1 Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
2 Include the role ARN details.
Create the Subscription object:
$ oc apply -f sub.yaml
At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
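For example, a minimal verification sketch, assuming the default openshift-operators namespace for the AllNamespaces install mode:
$ oc get subscription <subscription_name> -n openshift-operators
$ oc get csv -n openshift-operators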
Additional resources
Installing a specific version of an Operator
You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.
Prerequisites
Access to an OKD cluster using an account with cluster-admin permissions.
You have installed the OpenShift CLI (oc).
Procedure
Look up the available versions and channels of the Operator you want to install by running the following command:
Command syntax
$ oc describe packagemanifests <operator_name> -n <catalog_namespace>
For example, the following command prints the available channels and versions of the Red Hat Quay Operator from OperatorHub:
Example command
$ oc describe packagemanifests quay-operator -n openshift-marketplace
Example output
Name: quay-operator
Namespace: operator-marketplace
Labels: catalog=redhat-operators
catalog-namespace=openshift-marketplace
hypershift.openshift.io/managed=true
operatorframework.io/arch.amd64=supported
operatorframework.io/os.linux=supported
provider=Red Hat
provider-url=
Annotations: <none>
API Version: packages.operators.coreos.com/v1
Kind: PackageManifest
...
Current CSV: quay-operator.v3.7.11
...
Entries:
Name: quay-operator.v3.7.11
Version: 3.7.11
Name: quay-operator.v3.7.10
Version: 3.7.10
Name: quay-operator.v3.7.9
Version: 3.7.9
Name: quay-operator.v3.7.8
Version: 3.7.8
Name: quay-operator.v3.7.7
Version: 3.7.7
Name: quay-operator.v3.7.6
Version: 3.7.6
Name: quay-operator.v3.7.5
Version: 3.7.5
Name: quay-operator.v3.7.4
Version: 3.7.4
Name: quay-operator.v3.7.3
Version: 3.7.3
Name: quay-operator.v3.7.2
Version: 3.7.2
Name: quay-operator.v3.7.1
Version: 3.7.1
Name: quay-operator.v3.7.0
Version: 3.7.0
Name: stable-3.7
...
Current CSV: quay-operator.v3.8.5
...
Entries:
Name: quay-operator.v3.8.5
Version: 3.8.5
Name: quay-operator.v3.8.4
Version: 3.8.4
Name: quay-operator.v3.8.3
Version: 3.8.3
Name: quay-operator.v3.8.2
Version: 3.8.2
Name: quay-operator.v3.8.1
Version: 3.8.1
Name: quay-operator.v3.8.0
Version: 3.8.0
Name: stable-3.8
Default Channel: stable-3.8
Package Name: quay-operator
You can print an Operator’s version and channel information in the YAML format by running the following command:
$ oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:
$ oc get packagemanifest \
--selector=catalog=<catalogsource_name> \
--field-selector metadata.name=<operator_name> \
-n <catalog_namespace> -o yaml
If you do not specify the Operator’s catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met:
Multiple catalogs are installed in the same namespace.
The catalogs contain the same Operators or Operators with the same name.
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required role-based access control (RBAC) access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.
However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one:
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: <operatorgroup_name>
namespace: <namespace>
spec:
targetNamespaces:
- <namespace>
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.7.10:
Subscription with a specific starting Operator version
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: quay-operator
namespace: quay
spec:
channel: stable-3.7
installPlanApproval: Manual (1)
name: quay-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
startingCSV: quay-operator.v3.7.10 (2)
1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
2 Set a specific version of an Operator CSV.
Create the Subscription object:
$ oc apply -f sub.yaml
Manually approve the pending install plan to complete the Operator installation.
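For example, a minimal sketch of approving the plan from the CLI, assuming the quay namespace from the previous example:
$ oc get installplan -n quay
$ oc patch installplan <install_plan_name> -n quay --type merge --patch '{"spec":{"approved":true}}'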
Additional resources
Installing a specific version of an Operator in the web console
You can install a specific version of an Operator by using OperatorHub in the web console. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install.
Prerequisites
- You must have administrator privileges.
Procedure
From the web console, click Operators → OperatorHub.
Select an Operator you want to install.
From the selected Operator, you can select a Channel and Version from the lists.
The version selection defaults to the latest version for the selected channel. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, the Manual approval strategy is required when you install a version other than the latest for the selected channel.
Manual approval applies to all Operators installed in a namespace.
Installing an Operator with manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. To update Operators independently, install them into separate namespaces.
Click Install.
Verification
When the Operator is installed, the metadata indicates which channel and version are installed.
The channel and version dropdown menus are still available for viewing other version metadata in this catalog context.
Preparing for multiple instances of an Operator for multitenant clusters
As a cluster administrator, you can add multiple instances of an Operator for use in multitenant clusters. This is an alternative solution to either using the standard All namespaces install mode, which can be considered to violate the principle of least privilege, or the Multinamespace mode, which is not widely adopted. For more information, see “Operators in multitenant clusters”.
In the following procedure, the tenant is a user or group of users that share common access and privileges for a set of deployed workloads. The tenant Operator is the instance of an Operator that is intended for use by only that tenant.
Prerequisites
All instances of the Operator you want to install must be the same version across a given cluster.
For more information on this and other limitations, see “Operators in multitenant clusters”.
Procedure
Before installing the Operator, create a namespace for the tenant Operator that is separate from the tenant’s namespace. For example, if the tenant’s namespace is team1, you might create a team1-operator namespace:
Define a Namespace resource and save the YAML file, for example, team1-operator.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: team1-operator
Create the namespace by running the following command:
$ oc create -f team1-operator.yaml
Create an Operator group for the tenant Operator scoped to the tenant’s namespace, with only that one namespace entry in the spec.targetNamespaces list:
Define an OperatorGroup resource and save the YAML file, for example, team1-operatorgroup.yaml:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: team1-operatorgroup
namespace: team1-operator
spec:
targetNamespaces:
- team1 (1)
1 Define only the tenant’s namespace in the spec.targetNamespaces list.
Create the Operator group by running the following command:
$ oc create -f team1-operatorgroup.yaml
Next steps
Install the Operator in the tenant Operator namespace. This task is more easily performed by using the OperatorHub in the web console instead of the CLI; for a detailed procedure, see Installing from OperatorHub using the web console.
After completing the Operator installation, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator’s pod nor its service account are visible or usable by the tenant.
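If you do use the CLI instead, the Subscription follows the same pattern shown earlier; a minimal sketch, where the subscription, channel, Operator, and catalog source names are placeholders to adapt:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: <subscription_name>
namespace: team1-operator
spec:
channel: <channel_name>
name: <operator_name>
source: redhat-operators
sourceNamespace: openshift-marketplace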
Additional resources
Installing global Operators in custom namespaces
When installing Operators with the OKD web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. This can cause issues related to shared install plans and update policies between all Operators in the namespace. For more details on these limitations, see “Multitenancy and Operator colocation”.
As a cluster administrator, you can bypass this default behavior manually by creating a custom global namespace and using that namespace to install your individual or scoped set of Operators and their dependencies.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Before installing the Operator, create a namespace for the installation of your desired Operator. This installation namespace will become the custom global namespace:
Define a Namespace resource and save the YAML file, for example, global-operators.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: global-operators
Create the namespace by running the following command:
$ oc create -f global-operators.yaml
Create a custom global Operator group, which is an Operator group that watches all namespaces:
Define an OperatorGroup resource and save the YAML file, for example, global-operatorgroup.yaml. Omit both the spec.selector and spec.targetNamespaces fields to make it a global Operator group, which selects all namespaces:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: global-operatorgroup
namespace: global-operators
The status.namespaces of a created global Operator group contains the empty string (“”), which signals to a consuming Operator that it should watch all namespaces.
Create the Operator group by running the following command:
$ oc create -f global-operatorgroup.yaml
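You can optionally confirm the global scope; a quick check, assuming the names from this example:
$ oc get operatorgroup global-operatorgroup -n global-operators -o jsonpath='{.status.namespaces}{"\n"}'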
Next steps
Install the desired Operator in your custom global namespace. Because the web console does not populate the Installed Namespace menu with custom global namespaces during Operator installation, this task can only be performed with the OpenShift CLI (oc). For a detailed procedure, see Installing from OperatorHub using the CLI.
When you initiate the Operator installation, if the Operator has dependencies, the dependencies are also automatically installed in the custom global namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.
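For example, a minimal Subscription sketch that targets the custom global namespace; the subscription, channel, Operator, and catalog source names are placeholders:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: <subscription_name>
namespace: global-operators
spec:
channel: <channel_name>
name: <operator_name>
source: redhat-operators
sourceNamespace: openshift-marketplace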
Additional resources
Pod placement of Operator workloads
By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.
Controlling pod placement of Operator and Operand workloads has the following prerequisites:
Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a compute machine set or editing the node directly. You will use this label in a later step as the node selector on your project.
If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a compute machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain.
Create a project that is configured with a default node selector and, if you added a taint, a matching toleration.
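For example, a hedged sketch of these prerequisites using the myoperator label from above; the node and project names are placeholders, and the defaultTolerations annotation is an alpha Kubernetes feature that might not apply on every cluster:
$ oc label node <node_name> myoperator=""
$ oc adm taint nodes <node_name> myoperator:NoSchedule
$ oc adm new-project <project_name> --node-selector='myoperator='
$ oc annotate namespace <project_name> scheduler.alpha.kubernetes.io/defaultTolerations='[{"operator": "Exists", "key": "myoperator", "effect": "NoSchedule"}]'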
At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:
For Operator pods
Administrators can create a Subscription object in the project as described in the following section. As a result, the Operator pods are placed on the specified nodes.
For Operand pods
Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.
Additional resources
Adding taints and tolerations manually to nodes or with compute machine sets
Controlling where an Operator is installed
By default, when you install an Operator, OKD installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.
The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:
If an Operator requires a particular platform, such as amd64 or arm64
If an Operator requires a particular operating system, such as Linux or Windows
If you want Operators that work together scheduled on the same host or on hosts located on the same rack
If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues
You can control where an Operator pod is installed by adding node affinity, pod affinity, or pod anti-affinity constraints to the Operator’s Subscription object. Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. Pod affinity enables you to ensure that related pods are scheduled to the same node. Pod anti-affinity allows you to prevent a pod from being scheduled on a node.
The following examples show how to use node affinity or pod anti-affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster:
Node affinity example that places the Operator pod on a specific node
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
nodeAffinity: (1)
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ip-10-0-163-94.us-west-2.compute.internal
#...
1 A node affinity that requires the Operator’s pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal.
Node affinity example that places the Operator pod on a node with a specific platform
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
nodeAffinity: (1)
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
- key: kubernetes.io/os
operator: In
values:
- linux
#...
1 A node affinity that requires the Operator’s pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels.
Pod affinity example that places the Operator pod on one or more specific nodes
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
podAffinity: (1)
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- test
topologyKey: kubernetes.io/hostname
#...
1 A pod affinity that places the Operator’s pod on a node that has pods with the app=test label.
Pod anti-affinity example that prevents the Operator pod from being scheduled on one or more specific nodes
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
podAntiAffinity: (1)
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: cpu
operator: In
values:
- high
topologyKey: kubernetes.io/hostname
#...
1 A pod anti-affinity that prevents the Operator’s pod from being scheduled on a node that has pods with the cpu=high label.
Procedure
To control the placement of an Operator pod, complete the following steps:
Install the Operator as usual.
If needed, ensure that your nodes are labeled to properly respond to the affinity.
Edit the Operator’s Subscription object to add an affinity:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity: (1)
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ip-10-0-185-229.ec2.internal
#...
1 Add a nodeAffinity, podAffinity, or podAntiAffinity. See the Additional resources section that follows for information about creating the affinity.
Verification
To ensure that the pod is deployed on the specific node, run the following command:
$ oc get pods -o wide
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>
Additional resources