Working with bundle images
- Bundling an Operator
- Deploying an Operator with Operator Lifecycle Manager
- Publishing a catalog containing a bundled Operator
- Testing an Operator upgrade on Operator Lifecycle Manager
- Controlling Operator compatibility with OKD versions
- Additional resources
You can use the Operator SDK to package, deploy, and upgrade Operators in the bundle format for use on Operator Lifecycle Manager (OLM).
Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.9+ installed
- Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OKD
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Update your Makefile by setting the IMG URL to the Operator image name and tag that you pushed:
# Image URL to use all building/pushing image targets
IMG ?= <registry>/<user>/<operator_image_name>:<tag>
This value is used for subsequent operations.
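For example, with a hypothetical Quay.io namespace and image name (placeholders for illustration only), the updated line might read:
IMG ?= quay.io/example/memcached-operator:v0.0.1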
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:
$ make bundle
Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile, bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:
$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
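If you want to rerun validation on its own after editing the generated manifests, the bundle validate subcommand also accepts the on-disk bundle directory or a pushed bundle image. This is a sketch that assumes the default bundle/ output path and the image tag pushed above:
$ operator-sdk bundle validate ./bundle
$ operator-sdk bundle validate <registry>/<user>/<bundle_image_name>:<tag>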
Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OKD and runs as a Kubernetes extension, so you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OKD 4.9)
- Logged in to the cluster with oc using an account with cluster-admin permissions
- If your Operator is Go-based, your project must be updated to use supported images for running on OKD
Procedure
Check the status of OLM on your cluster by using the following Operator SDK command:
$ operator-sdk olm status \
--olm-namespace=openshift-operator-lifecycle-manager
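OLM is preinstalled on OKD, so the status check above should succeed. If you are instead testing against a vanilla Kubernetes cluster that does not have OLM, you can install a version of it first; this step is a sketch and is not required on OKD:
$ operator-sdk olm install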
Run the Operator on your cluster by using the OLM integration in Operator SDK:
$ operator-sdk run bundle \
[-n <namespace>] \(1)
<registry>/<user>/<bundle_image_name>:<tag>
1 By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required objects, including RBAC.
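After the command finishes, you can confirm that the Operator reached the Succeeded phase by checking its cluster service version in the namespace that you installed into:
$ oc get csv -n <namespace>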
Publishing a catalog containing a bundled Operator
To install and manage Operators, Operator Lifecycle Manager (OLM) requires that Operator bundles are listed in an index image, which is referenced by a catalog on the cluster. As an Operator author, you can use the Operator SDK to create an index containing the bundle for your Operator and all of its dependencies. This is useful for testing on remote clusters and publishing to container registries.
The Operator SDK uses the opm CLI to facilitate index image creation. For more advanced use cases, you can use the opm command directly, as described in "Managing custom catalogs" in the additional resources.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OKD 4.9)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Run the following make command in your Operator project directory to build an index image containing your Operator bundle:
$ make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>
where the CATALOG_IMG argument references a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Push the built index image to a repository:
$ make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>
You can use Operator SDK make commands together if you would rather perform multiple actions in sequence. For example, if you have not yet built a bundle image for your Operator project, you can build and push both a bundle image and an index image with the following syntax:
$ make bundle-build bundle-push catalog-build catalog-push \
BUNDLE_IMG=<bundle_image_pull_spec> \
CATALOG_IMG=<index_image_pull_spec>
Alternatively, you can set the IMAGE_TAG_BASE field in your Makefile to an existing repository:
IMAGE_TAG_BASE=quay.io/example/my-operator
You can then use the following syntax to build and push images with automatically generated names, such as quay.io/example/my-operator-bundle:v0.0.1 for the bundle image and quay.io/example/my-operator-catalog:v0.0.1 for the index image:
$ make bundle-build bundle-push catalog-build catalog-push
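These default names are derived from the VERSION and IMAGE_TAG_BASE variables in the scaffolded Makefile. If you prefer not to edit the Makefile, you can usually override both variables on the command line instead; this sketch assumes your scaffolded Makefile derives BUNDLE_IMG and CATALOG_IMG from IMAGE_TAG_BASE and VERSION, as current Operator SDK scaffolding does:
$ make bundle-build bundle-push catalog-build catalog-push \
    IMAGE_TAG_BASE=quay.io/example/my-operator VERSION=0.0.1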
Define a CatalogSource object that references the index image that you just generated, and then create the object by using the oc apply command or web console:
Example CatalogSource YAML
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-memcached
  namespace: default
spec:
  displayName: My Test
  publisher: Company
  sourceType: grpc
  image: quay.io/example/memcached-catalog:v0.0.1 (1)
  updateStrategy:
    registryPoll:
      interval: 10m
1 Set image to the image pull spec that you used previously with the CATALOG_IMG argument.
Check the catalog source:
$ oc get catalogsource
Example output
NAME DISPLAY TYPE PUBLISHER AGE
cs-memcached My Test grpc Company 4h31m
Verification
Install the Operator using your catalog:
Define an OperatorGroup object and create it by using the oc apply command or web console:
Example OperatorGroup YAML
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-test
  namespace: default
spec:
  targetNamespaces:
  - default
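For example, if you save the manifest to a file, you can create the object from the command line; the file name here is only a placeholder:
$ oc apply -f operatorgroup.yaml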
Define a Subscription object and create it by using the oc apply command or web console:
Example Subscription YAML
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: catalogtest
  namespace: default
spec:
  channel: "alpha"
  installPlanApproval: Manual
  name: catalog
  source: cs-memcached
  sourceNamespace: default
  startingCSV: memcached-operator.v0.0.1
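Because this example sets installPlanApproval: Manual, OLM waits for the generated install plan to be approved before installing the Operator. A minimal sketch of approving it from the command line, assuming the default namespace used above:
$ oc get installplan -n default
$ oc patch installplan <install_plan_name> -n default --type merge --patch '{"spec":{"approved":true}}'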
Verify the installed Operator is running:
Check the Operator group:
$ oc get og
Example output
NAME AGE
my-test 4h40m
Check the cluster service version (CSV):
$ oc get csv
Example output
NAME DISPLAY VERSION REPLACES PHASE
memcached-operator.v0.0.1 Test 0.0.1 Succeeded
Check the pods for the Operator:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m
catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m
cs-memcached-7622r 1/1 Running 0 4h33m
Additional resources
- See Managing custom catalogs for details on direct usage of the opm CLI for more advanced use cases.
Testing an Operator upgrade on Operator Lifecycle Manager
You can quickly test upgrading your Operator by using Operator Lifecycle Manager (OLM) integration in the Operator SDK, without requiring you to manually manage index images and catalog sources.
The run bundle-upgrade subcommand automates triggering an installed Operator to upgrade to a later version by specifying a bundle image for the later version.
Prerequisites
- Operator installed with OLM either by using the run bundle subcommand or with traditional OLM installation
- A bundle image that represents a later version of the installed Operator
Procedure
If your Operator has not already been installed with OLM, install the earlier version either by using the run bundle subcommand or with traditional OLM installation.
If the earlier version of the bundle was installed traditionally using OLM, the newer bundle that you intend to upgrade to must not exist in the index image referenced by the catalog source. Otherwise, running the run bundle-upgrade subcommand causes the registry pod to fail because the newer bundle is already referenced by the index that provides the package and cluster service version (CSV).
For example, you can use the following run bundle subcommand for a Memcached Operator by specifying the earlier bundle image:
$ operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1
Example output
INFO[0009] Successfully created registry pod: quay-io-demo-memcached-operator-v0-0-1
INFO[0009] Created CatalogSource: memcached-operator-catalog
INFO[0010] OperatorGroup "operator-sdk-og" created
INFO[0010] Created Subscription: memcached-operator-v0-0-1-sub
INFO[0013] Approved InstallPlan install-bqggr for the Subscription: memcached-operator-v0-0-1-sub
INFO[0013] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to reach 'Succeeded' phase
INFO[0013] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to appear
INFO[0019] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Succeeded
Upgrade the installed Operator by specifying the bundle image for the later Operator version:
$ operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2
Example output
INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project
INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project
INFO[0009] Successfully created registry pod: quay-io-demo-memcached-operator-v0-0-2
INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations
INFO[0010] Deleted previous registry pod with name "quay-io-demo-memcached-operator-v0-0-1"
INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub
INFO[0042] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.2" to reach 'Succeeded' phase
INFO[0042] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: InstallReady
INFO[0043] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Installing
INFO[0044] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Succeeded
INFO[0044] Successfully upgraded to "memcached-operator.v0.0.2"
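Optionally, before cleaning up, confirm that the upgraded version is now the installed one; the namespace matches the example output above:
$ oc get csv -n my-project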
Clean up the installed Operators:
$ operator-sdk cleanup memcached-operator
Controlling Operator compatibility with OKD versions
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If your Operator is using a deprecated API, it might no longer work after the OKD cluster is upgraded to the Kubernetes version where the API has been removed. As an Operator author, it is strongly recommended that you review the Deprecated API Migration Guide in Kubernetes documentation and keep your Operator projects up to date to avoid using deprecated and removed APIs. Ideally, you should update your Operator before the release of a future version of OKD that would make the Operator incompatible.
When an API is removed from an OKD version, Operators running on that cluster version that are still using removed APIs will no longer work properly. As an Operator author, you should plan to update your Operator projects to accommodate API deprecation and removal to avoid interruptions for users of your Operator.
You can check the event alerts of your Operators to find whether there are any warnings about APIs currently in use. Alerts fire when an API that will be removed in the next release is detected in use.
If a cluster administrator has installed your Operator, they must ensure that the installed version is compatible with the next cluster version before they upgrade to the next version of OKD. While it is recommended that you update your Operator projects to no longer use deprecated or removed APIs, if you still need to publish your Operator bundles with removed APIs for continued use on earlier versions of OKD, ensure that the bundle is configured accordingly.
The following procedure helps prevent administrators from installing versions of your Operator on an incompatible version of OKD. These steps also prevent administrators from upgrading to a newer version of OKD that is incompatible with the version of your Operator that is currently installed on their cluster.
This procedure is also useful when you know that the current version of your Operator will not work well, for any reason, on a specific OKD version. By defining the cluster versions where the Operator should be distributed, you ensure that the Operator does not appear in a catalog of a cluster version which is outside of the allowed range.
Operators that use deprecated APIs can adversely impact critical workloads when cluster administrators upgrade to a future version of OKD where the API is no longer supported. If your Operator is using deprecated APIs, you should configure the following settings in your Operator project as soon as possible.
Prerequisites
- An existing Operator project
Procedure
If you know that a specific bundle of your Operator is not supported and will not work correctly on OKD later than a certain cluster version, configure the maximum version of OKD that your Operator is compatible with. In your Operator project’s cluster service version (CSV), set the olm.maxOpenShiftVersion annotation to prevent administrators from upgrading their cluster before upgrading the installed Operator to a compatible version:
Example CSV with olm.maxOpenShiftVersion annotation
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]' (1)
1 Specify the maximum cluster version of OKD that your Operator is compatible with. For example, setting value to 4.9 prevents cluster upgrades to OKD versions later than 4.9 when this bundle is installed on a cluster.
If your bundle is intended for distribution in a Red Hat-provided Operator catalog, configure the compatible versions of OKD for your Operator by setting the following properties. This configuration ensures your Operator is only included in catalogs that target compatible versions of OKD:
This step is only valid when publishing Operators in Red Hat-provided catalogs. If your bundle is only intended for distribution in a custom catalog, you can skip this step. For more details, see “Red Hat-provided Operator catalogs”.
Set the com.redhat.openshift.versions annotation in your project’s bundle/metadata/annotations.yaml file:
Example bundle/metadata/annotations.yaml file with compatible versions
com.redhat.openshift.versions: "v4.7-v4.9" (1)
1 Set to a range or single version.
To prevent your bundle from being carried on to an incompatible version of OKD, ensure that the index image is generated with the proper com.redhat.openshift.versions label in your Operator’s bundle image. For example, if your project was generated using the Operator SDK, update the bundle.Dockerfile file:
Example bundle.Dockerfile with compatible versions
LABEL com.redhat.openshift.versions="<versions>" (1)
1 Set to a range or single version, for example, v4.7-v4.9. This setting defines the cluster versions where the Operator should be distributed, and the Operator does not appear in a catalog of a cluster version which is outside of the range.
You can now bundle a new version of your Operator and publish the updated version to a catalog for distribution.
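For example, after updating the annotation and label, you can rebuild and publish the updated bundle and catalog with the combined make syntax shown earlier; this sketch assumes the scaffolded Makefile’s VERSION variable is used to tag the new images:
$ make bundle bundle-build bundle-push catalog-build catalog-push VERSION=0.0.2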
Additional resources
- Managing OpenShift Versions in the Certified Operator Build Guide
Additional resources
- See Operator Framework packaging format for details on the bundle format.
- See Managing custom catalogs for details on adding bundle images to index images by using the opm command.
- See Operator Lifecycle Manager workflow for details on how upgrades work for installed Operators.