Operator SDK tutorial for Go-based Operators
Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
Operator SDK
The operator-sdk
CLI tool and controller-runtime
library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators.
Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4+ installed
- Go 1.19+
- Logged in to an OKD 4 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator
Change to the directory:
$ cd $HOME/projects/memcached-operator
Activate support for Go modules:
$ export GO111MODULE=on
Run the operator-sdk init command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator

The operator-sdk init command uses the Go plugin by default.

The operator-sdk init command generates a go.mod file to be used with Go modules. The --repo flag is required when creating a project outside of $GOPATH/src/, because generated files require a valid module path.
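For reference, the generated go.mod declares the module path from the --repo flag. The following is a minimal sketch of its opening lines, assuming the Go version from the prerequisites; the require block is omitted here because the dependencies the SDK pins vary by release:

module github.com/example-inc/memcached-operator

go 1.19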
PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, run from the project root read this file and are aware that the project type is Go. For example:
domain: example.com
layout:
- go.kubebuilder.io/v3
projectName: memcached-operator
repo: github.com/example-inc/memcached-operator
version: "3"
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
About the Manager
The main program for the Operator is the main.go
file, which initializes and runs the Manager. The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks.
The Manager can restrict the namespace that all controllers watch for resources:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})
By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace
option empty:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})
You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces:
import (
	...
	"sigs.k8s.io/controller-runtime/pkg/cache"
)

var namespaces []string (1)
mgr, err := ctrl.NewManager(cfg, manager.Options{ (2)
	NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})

1 List of namespaces.
2 Creates a Cmd struct to provide shared dependencies and start components.
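To see how these options fit together, the following condensed sketch shows the shape of the scaffolded main.go: it registers the client-go and Memcached schemes, creates the Manager, and wires in the controller before starting. The generated file contains additional setup, such as metrics, health probes, and leader election flags, so treat this as an outline rather than the exact scaffold:

package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
	"github.com/example-inc/memcached-operator/controllers"
)

var scheme = runtime.NewScheme()

func init() {
	// Register the core Kubernetes types and the Memcached CR types
	// so the Manager's client and cache can work with both.
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(cachev1.AddToScheme(scheme))
}

func main() {
	ctrl.SetLogger(zap.New())

	// An empty Namespace value watches all namespaces, as described above.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:    scheme,
		Namespace: "",
	})
	if err != nil {
		os.Exit(1)
	}

	// Wire the Memcached controller into the Manager.
	if err := (&controllers.MemcachedReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		os.Exit(1)
	}

	// Block until the Manager exits, for example on SIGTERM.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}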
About multi-group APIs
Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command:
$ operator-sdk edit --multigroup=true
This command updates the PROJECT
file, which should look like the following example:
domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...
For multi-group projects, the API Go type files are created in the apis/<group>/<version>/
directory, and the controllers are created in the controllers/<group>/
directory. The Dockerfile is then updated accordingly.
Additional resource
- For more details on migrating to a multi-group project, see the Kubebuilder documentation.
Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
Run the following command to create an API with group cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --group=cache \
    --version=v1 \
    --kind=Memcached

When prompted, enter y to create both the resource and the controller:

Create Resource [y/n]
y
Create Controller [y/n]
y
Example output
Writing scaffold for you to edit...
api/v1/memcached_types.go
controllers/memcached_controller.go
...
This process generates the Memcached
resource API at api/v1/memcached_types.go
and the controller at controllers/memcached_controller.go
.
Defining the API
Define the API for the Memcached
custom resource (CR).
Procedure
Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status:

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// +kubebuilder:validation:Minimum=0
	// Size is the size of the memcached deployment
	Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
	// Nodes are the names of the memcached pods
	Nodes []string `json:"nodes"`
}
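These two structs do not stand alone: the scaffolded api/v1/memcached_types.go also defines the Memcached kind and its list type around them, using the metav1 import already present in the generated file. The following is a sketch of that generated shape; the exact scaffold can vary slightly by SDK version:

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Memcached is the Schema for the memcacheds API
type Memcached struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MemcachedSpec   `json:"spec,omitempty"`
	Status MemcachedStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// MemcachedList contains a list of Memcached
type MemcachedList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Memcached `json:"items"`
}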
Update the generated code for the resource type:
$ make generate
After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

This Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.
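As an illustration of what that interface requires, controller-gen emits deep-copy helpers along the following lines in zz_generated.deepcopy.go. This is a hand-written approximation rather than the exact generated output:

// DeepCopyObject satisfies the runtime.Object interface by returning a
// deep copy of the Memcached object. controller-gen generates this method
// for every kind marked as a root object.
func (in *Memcached) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}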
Generating CRD manifests
After the API is defined with spec
and status
fields and custom resource definition (CRD) validation markers, you can generate CRD manifests.
Procedure
Run the following command to generate and update CRD manifests:
$ make manifests
This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.
About OpenAPI validation
OpenAPIv3 schemas are added to CRD manifests in the spec.validation
block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated.
Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation
prefix.
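For instance, validation markers on the MemcachedSpec type from this tutorial could bound the size field on both ends. The Minimum marker is already part of this tutorial; the Maximum value shown here is a hypothetical addition for illustration:

type MemcachedSpec struct {
	// Size is the size of the memcached deployment.
	// Reject CRs whose size falls outside 0-10 at admission time.
	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=10
	Size int32 `json:"size"`
}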
Additional resources
For more details on the usage of markers in API code, see the Kubebuilder documentation.
For more details about OpenAPIv3 validation schemas in CRDs, see the Kubernetes documentation.
Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation:

Example memcached_controller.go
/*
Copyright 2020.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
	"context"
	"reflect"

	"github.com/go-logr/logr"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	ctrllog "sigs.k8s.io/controller-runtime/pkg/log"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
)
// MemcachedReconciler reconciles a Memcached object
type MemcachedReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the Memcached object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.7.0/pkg/reconcile
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	//log := r.Log.WithValues("memcached", req.NamespacedName)
	log := ctrllog.FromContext(ctx)

	// Fetch the Memcached instance
	memcached := &cachev1.Memcached{}
	err := r.Get(ctx, req.NamespacedName, memcached)
	if err != nil {
		if errors.IsNotFound(err) {
			// Request object not found, could have been deleted after reconcile request.
			// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
			// Return and don't requeue
			log.Info("Memcached resource not found. Ignoring since object must be deleted")
			return ctrl.Result{}, nil
		}
		// Error reading the object - requeue the request.
		log.Error(err, "Failed to get Memcached")
		return ctrl.Result{}, err
	}

	// Check if the deployment already exists, if not create a new one
	found := &appsv1.Deployment{}
	err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
	if err != nil && errors.IsNotFound(err) {
		// Define a new deployment
		dep := r.deploymentForMemcached(memcached)
		log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
		err = r.Create(ctx, dep)
		if err != nil {
			log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
			return ctrl.Result{}, err
		}
		// Deployment created successfully - return and requeue
		return ctrl.Result{Requeue: true}, nil
	} else if err != nil {
		log.Error(err, "Failed to get Deployment")
		return ctrl.Result{}, err
	}

	// Ensure the deployment size is the same as the spec
	size := memcached.Spec.Size
	if *found.Spec.Replicas != size {
		found.Spec.Replicas = &size
		err = r.Update(ctx, found)
		if err != nil {
			log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
			return ctrl.Result{}, err
		}
		// Spec updated - return and requeue
		return ctrl.Result{Requeue: true}, nil
	}

	// Update the Memcached status with the pod names
	// List the pods for this memcached's deployment
	podList := &corev1.PodList{}
	listOpts := []client.ListOption{
		client.InNamespace(memcached.Namespace),
		client.MatchingLabels(labelsForMemcached(memcached.Name)),
	}
	if err = r.List(ctx, podList, listOpts...); err != nil {
		log.Error(err, "Failed to list pods", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name)
		return ctrl.Result{}, err
	}
	podNames := getPodNames(podList.Items)

	// Update status.Nodes if needed
	if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
		memcached.Status.Nodes = podNames
		err := r.Status().Update(ctx, memcached)
		if err != nil {
			log.Error(err, "Failed to update Memcached status")
			return ctrl.Result{}, err
		}
	}

	return ctrl.Result{}, nil
}
// deploymentForMemcached returns a memcached Deployment object
func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment {
	ls := labelsForMemcached(m.Name)
	replicas := m.Spec.Size

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      m.Name,
			Namespace: m.Namespace,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{
				MatchLabels: ls,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: ls,
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Image:   "memcached:1.4.36-alpine",
						Name:    "memcached",
						Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
						Ports: []corev1.ContainerPort{{
							ContainerPort: 11211,
							Name:          "memcached",
						}},
					}},
				},
			},
		},
	}
	// Set Memcached instance as the owner and controller
	ctrl.SetControllerReference(m, dep, r.Scheme)
	return dep
}
// labelsForMemcached returns the labels for selecting the resources
// belonging to the given memcached CR name.
func labelsForMemcached(name string) map[string]string {
	return map[string]string{"app": "memcached", "memcached_cr": name}
}

// getPodNames returns the pod names of the array of pods passed in
func getPodNames(pods []corev1.Pod) []string {
	var podNames []string
	for _, pod := range pods {
		podNames = append(podNames, pod.Name)
	}
	return podNames
}
// SetupWithManager sets up the controller with the Manager.
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		Complete(r)
}
The example controller runs the following reconciliation logic for each Memcached custom resource (CR):

- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the memcached pods.
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
Resources watched by the controller
The SetupWithManager()
function in controllers/memcached_controller.go
specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller.
import (
	...
	appsv1 "k8s.io/api/apps/v1"
	...
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		Complete(r)
}
NewControllerManagedBy()
provides a controller builder that allows various controller configurations.
For(&cachev1.Memcached{})
specifies the Memcached
type as the primary resource to watch. For each Add, Update, or Delete event for a Memcached
type, the reconcile loop is sent a reconcile Request
argument, which consists of a namespace and name key, for that Memcached
object.
Owns(&appsv1.Deployment{})
specifies the Deployment
type as the secondary resource to watch. For each Deployment
type Add, Update, or Delete event, the event handler maps each event to a reconcile request for the owner of the deployment. In this case, the owner is the Memcached
object for which the deployment was created.
Controller configurations
You can initialize a controller by using many other useful configurations. For example:
Set the maximum number of concurrent reconciles for the controller by using the MaxConcurrentReconciles option, which defaults to 1:

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		WithOptions(controller.Options{
			MaxConcurrentReconciles: 2,
		}).
		Complete(r)
}
Filter watch events using predicates, as shown in the sketch after this list.
Choose the type of EventHandler to change how a watch event translates to reconcile requests for the reconcile loop. For Operator relationships that are more complex than primary and secondary resources, you can use the
EnqueueRequestsFromMapFunc
handler to transform a watch event into an arbitrary set of reconcile requests.
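As a sketch of the predicate option, the following filters update events through the controller-runtime GenerationChangedPredicate, which drops updates that do not change an object's metadata.generation, such as status-only writes:

import (
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		// Ignore update events where metadata.generation did not change,
		// for example updates that touch only the status subresource.
		WithEventFilter(predicate.GenerationChangedPredicate{}).
		Complete(r)
}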
For more details on these and other configurations, see the upstream Builder and Controller GoDocs.
Reconcile loop
Every controller has a reconciler object with a Reconcile()
method that implements the reconcile loop. The reconcile loop is passed the Request
argument, which is a namespace and name key used to find the primary resource object, Memcached
, from the cache:
import (
	ctrl "sigs.k8s.io/controller-runtime"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
	...
)

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Lookup the Memcached instance for this reconcile request
	memcached := &cachev1.Memcached{}
	err := r.Get(ctx, req.NamespacedName, memcached)
	...
}
Based on the returned Result and error values, the request might be requeued and the reconcile loop might be triggered again:
// Reconcile successful - don't requeue
return ctrl.Result{}, nil
// Reconcile failed due to error - requeue
return ctrl.Result{}, err
// Requeue for any reason other than an error
return ctrl.Result{Requeue: true}, nil
You can set the Result.RequeueAfter
to requeue the request after a grace period as well:
import "time"
// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
Returning a Result with RequeueAfter set is how you can periodically reconcile a CR.
For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.
Permissions and RBAC manifests
The controller requires certain RBAC permissions to interact with the resources it manages. These are specified using RBAC markers, such as the following:
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
...
}
The ClusterRole
object manifest at config/rbac/role.yaml
is generated from the previous markers by using the controller-gen
utility whenever the make manifests
command is run.
Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
This tutorial uses the HTTP_PROXY environment variable as an example.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Edit the controllers/memcached_controller.go file to include the following:

Import the proxy package from the operator-lib library:

import (
	...
	"github.com/operator-framework/operator-lib/proxy"
)
Add the proxy.ReadProxyVarsFromEnv helper function to the reconcile loop and append the results to the Operand environments:

for i, container := range dep.Spec.Template.Spec.Containers {
	dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...)
}
...
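One place this loop can run, sketched below, is inside the Reconcile function right after the Deployment object is constructed and before it is created on the cluster. The surrounding lines come from the reconcile logic earlier in this tutorial, and the sketch assumes the proxy import shown above:

// Define a new deployment, then inject the proxy variables into every
// container before creating it on the cluster.
dep := r.deploymentForMemcached(memcached)
for i, container := range dep.Spec.Template.Spec.Containers {
	dep.Spec.Template.Spec.Containers[i].Env = append(container.Env,
		proxy.ReadProxyVarsFromEnv()...)
}
err = r.Create(ctx, dep)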
Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

containers:
- args:
  - --leader-elect
  image: controller:latest
  name: manager
  env:
  - name: "HTTP_PROXY"
    value: "http_proxy_test"
Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
Run locally outside the cluster as a Go program.
Run as a deployment on the cluster.
Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
Before running your Go-based Operator as either a deployment on OKD or as a bundle that uses OLM, ensure that your project has been updated to use supported images.
Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run
Example output
...
2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {"addr": ":8080"}
2021-01-10T21:09:29.017-0700 INFO setup starting manager
2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "source": "kind source: /, Kind="}
2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {"reconciler group": "cache.example.com", "reconciler kind": "Memcached"}
2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "worker count": 1}
Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Prerequisites
- Prepared your Go-based Operator to run on OKD by updating the project to use supported images
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used instead. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME READY UP-TO-DATE AVAILABLE AGE
<project_name>-controller-manager 1/1 1 1 8m
Bundling an Operator and deploying with Operator Lifecycle Manager
Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4+ installed
- Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OKD
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used instead. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OKD and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc
) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OKD 4)
- Logged in to the cluster with oc using an account with cluster-admin permissions
- If your Operator is Go-based, your project must be updated to use supported images for running on OKD
Procedure
Enter the following command to run the Operator on the cluster:

$ operator-sdk run bundle \ (1)
    -n <namespace> \ (2)
    <registry>/<user>/<bundle_image_name>:<tag> (3)

1 The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
2 Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
3 If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.

As of OKD 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the
Memcached
CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

$ oc project memcached-operator-system

Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
  ...
spec:
  ...
  size: 3
Create the CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml
Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments
Example output
NAME READY UP-TO-DATE AVAILABLE AGE
memcached-operator-controller-manager 1/1 1 1 8m
memcached-sample 3/3 3 3 1m
Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE
memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m
memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m
memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m
Check the CR status:
$ oc get memcached/memcached-sample -o yaml
Example output
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  ...
  name: memcached-sample
  ...
spec:
  size: 3
status:
  nodes:
  - memcached-sample-6fd7c98d8-7dqdr
  - memcached-sample-6fd7c98d8-g5k7v
  - memcached-sample-6fd7c98d8-m7vn7
Update the deployment size.

Change the spec.size field in the Memcached CR from 3 to 5. You can edit the config/samples/cache_v1_memcached.yaml file and reapply it, or patch the CR directly:

$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge
Confirm that the Operator changes the deployment size:
$ oc get deployments
Example output
NAME READY UP-TO-DATE AVAILABLE AGE
memcached-operator-controller-manager 1/1 1 1 10m
memcached-sample 5/5 5 5 3m
Delete the CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml
Clean up the resources that have been created as part of this tutorial.

If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
Additional resources
See Project layout for Go-based Operators to learn about the directory structures created by the Operator SDK.
If a cluster-wide egress proxy is configured, cluster administrators can override the proxy settings or inject a custom CA certificate for specific Operators running on Operator Lifecycle Manager (OLM).