Operator SDK tutorial for Java-based Operators
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator developers can take advantage of Java programming language support in the Operator SDK to build an example Java-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
Operator SDK
The operator-sdk CLI tool and java-operator-sdk library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Java-based Operators.
Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4+ installed
- Java 11+
- Maven 3.6.3+
- Logged into an OKD 4 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator
Change to the directory:
$ cd $HOME/projects/memcached-operator
Run the operator-sdk init command with the quarkus plugin to initialize the project:
$ operator-sdk init \
    --plugins=quarkus \
    --domain=example.com \
    --project-name=memcached-operator
PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, run from the project root read this file and are aware that the project type is Java. For example:
domain: example.com
layout:
- quarkus.javaoperatorsdk.io/v1-alpha
projectName: memcached-operator
version: "3"
Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
Run the following command to create an API:
$ operator-sdk create api \
--plugins=quarkus \ (1)
--group=cache \ (2)
--version=v1 \ (3)
--kind=Memcached (4)
1 Set the plugin flag to quarkus.
2 Set the group flag to cache.
3 Set the version flag to v1.
4 Set the kind flag to Memcached.
Verification
Run the tree command to view the file structure:
$ tree
Example output
.
├── Makefile
├── PROJECT
├── pom.xml
└── src
└── main
├── java
│ └── com
│ └── example
│ ├── Memcached.java
│ ├── MemcachedReconciler.java
│ ├── MemcachedSpec.java
│ └── MemcachedStatus.java
└── resources
└── application.properties
6 directories, 8 files
Defining the API
Define the API for the Memcached custom resource (CR).
Procedure
Edit the following files that were generated as part of the create api process:
Update the following attributes in the MemcachedSpec.java file to define the desired state of the Memcached CR:
public class MemcachedSpec {
private Integer size;
public Integer getSize() {
return size;
}
public void setSize(Integer size) {
this.size = size;
}
}
Update the following attributes in the MemcachedStatus.java file to define the observed state of the Memcached CR:
The example below illustrates a nodes status field. It is recommended that you use typical status properties in practice; a brief sketch of that approach follows the example below.
import java.util.ArrayList;
import java.util.List;
public class MemcachedStatus {
// Add Status information here
// Nodes are the names of the memcached pods
private List<String> nodes;
public List<String> getNodes() {
if (nodes == null) {
nodes = new ArrayList<>();
}
return nodes;
}
public void setNodes(List<String> nodes) {
this.nodes = nodes;
}
}
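The following is a minimal sketch, not part of the tutorial code, of what a MemcachedStatus using a more typical status property might look like. The readyReplicas field is hypothetical and is not referenced anywhere else in this tutorial:
```
import java.util.ArrayList;
import java.util.List;

public class MemcachedStatus {
    // Names of the memcached pods, as in the tutorial example above.
    private List<String> nodes;

    // Hypothetical example of a typical status property: the number of pods
    // currently reporting ready. Illustrative only.
    private Integer readyReplicas;

    public List<String> getNodes() {
        if (nodes == null) {
            nodes = new ArrayList<>();
        }
        return nodes;
    }

    public void setNodes(List<String> nodes) {
        this.nodes = nodes;
    }

    public Integer getReadyReplicas() {
        return readyReplicas;
    }

    public void setReadyReplicas(Integer readyReplicas) {
        this.readyReplicas = readyReplicas;
    }
}
```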
Update the Memcached.java file to define the schema for the Memcached API, which extends both the MemcachedSpec and MemcachedStatus classes:
@Version("v1")
@Group("cache.example.com")
public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
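For reference, the complete Memcached.java file might look like the following sketch. The package and import statements shown here are assumptions based on the fabric8 kubernetes-client classes used by the quarkus plugin; adjust them to match your generated project:
```
package com.example;

import io.fabric8.kubernetes.api.model.Namespaced;
import io.fabric8.kubernetes.client.CustomResource;
import io.fabric8.kubernetes.model.annotation.Group;
import io.fabric8.kubernetes.model.annotation.Version;

@Version("v1")
@Group("cache.example.com")
public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
```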
Generating CRD manifests
After the API is defined with the MemcachedSpec and MemcachedStatus files, you can generate CRD manifests.
Procedure
Run the following command from the memcached-operator directory to generate the CRD:
$ mvn clean install
Verification
Verify the contents of the CRD in the target/kubernetes/memcacheds.cache.example.com-v1.yml file, as shown in the following example:
$ cat target/kubernetes/memcacheds.cache.example.com-v1.yml
Example output
# Generated by Fabric8 CRDGenerator, manual edits might get overwritten!
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: memcacheds.cache.example.com
spec:
group: cache.example.com
names:
kind: Memcached
plural: memcacheds
singular: memcached
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
spec:
properties:
size:
type: integer
type: object
status:
properties:
nodes:
items:
type: string
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}
Creating a Custom Resource
After generating the CRD manifests, you can create the Custom Resource (CR).
Procedure
Create a Memcached CR in a file called memcached-sample.yaml:
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
name: memcached-sample
spec:
# Add spec fields here
size: 1
Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
Append the following dependency to the pom.xml file. It provides the CollectionUtils class that the example controller uses to compare pod name lists:
<dependency>
<groupId>commons-collections</groupId>
<artifactId>commons-collections</artifactId>
<version>3.2.2</version>
</dependency>
For this example, replace the generated controller file MemcachedReconciler.java with the following example implementation:
Example MemcachedReconciler.java
```
package com.example;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.javaoperatorsdk.operator.api.reconciler.Context;
import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;
import io.fabric8.kubernetes.api.model.ContainerBuilder;
import io.fabric8.kubernetes.api.model.ContainerPortBuilder;
import io.fabric8.kubernetes.api.model.LabelSelectorBuilder;
import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodSpecBuilder;
import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder;
import org.apache.commons.collections.CollectionUtils;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MemcachedReconciler implements Reconciler<Memcached> {
    private final KubernetesClient client;

    public MemcachedReconciler(KubernetesClient client) {
        this.client = client;
    }

    @Override
    public UpdateControl<Memcached> reconcile(
        Memcached resource, Context context) {
Deployment deployment = client.apps()
.deployments()
.inNamespace(resource.getMetadata().getNamespace())
.withName(resource.getMetadata().getName())
.get();
if (deployment == null) {
Deployment newDeployment = createMemcachedDeployment(resource);
client.apps().deployments().create(newDeployment);
return UpdateControl.noUpdate();
}
int currentReplicas = deployment.getSpec().getReplicas();
int requiredReplicas = resource.getSpec().getSize();
if (currentReplicas != requiredReplicas) {
deployment.getSpec().setReplicas(requiredReplicas);
client.apps().deployments().createOrReplace(deployment);
return UpdateControl.noUpdate();
}
List<Pod> pods = client.pods()
.inNamespace(resource.getMetadata().getNamespace())
.withLabels(labelsForMemcached(resource))
.list()
.getItems();
List<String> podNames =
pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());
if (resource.getStatus() == null
|| !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
resource.getStatus().setNodes(podNames);
return UpdateControl.updateResource(resource);
}
return UpdateControl.noUpdate();
}
private Map<String, String> labelsForMemcached(Memcached m) {
Map<String, String> labels = new HashMap<>();
labels.put("app", "memcached");
labels.put("memcached_cr", m.getMetadata().getName());
return labels;
}
private Deployment createMemcachedDeployment(Memcached m) {
Deployment deployment = new DeploymentBuilder()
.withMetadata(
new ObjectMetaBuilder()
.withName(m.getMetadata().getName())
.withNamespace(m.getMetadata().getNamespace())
.build())
.withSpec(
new DeploymentSpecBuilder()
.withReplicas(m.getSpec().getSize())
.withSelector(
new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
.withTemplate(
new PodTemplateSpecBuilder()
.withMetadata(
new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
.withSpec(
new PodSpecBuilder()
.withContainers(
new ContainerBuilder()
.withImage("memcached:1.4.36-alpine")
.withName("memcached")
.withCommand("memcached", "-m=64", "-o", "modern", "-v")
.withPorts(
new ContainerPortBuilder()
.withContainerPort(11211)
.withName("memcached")
.build())
.build())
.build())
.build())
.build())
.build();
deployment.addOwnerReference(m);
return deployment;
}
}
```
The example controller runs the following reconciliation logic for each `Memcached` custom resource (CR):
- Creates a Memcached deployment if it does not exist.
- Ensures that the deployment size matches the size specified by the `Memcached` CR spec.
- Updates the `Memcached` CR status with the names of the `memcached` pods.
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
Reconcile loop
Every controller has a reconciler object with a reconcile() method that implements the reconcile loop. The reconcile loop retrieves the Deployment for the custom resource, as shown in the following example:
Deployment deployment = client.apps()
.deployments()
.inNamespace(resource.getMetadata().getNamespace())
.withName(resource.getMetadata().getName())
.get();
As shown in the following example, if the Deployment is null, the deployment must be created. After you create the Deployment, you can determine whether reconciliation is necessary. If no reconciliation is needed, return the value of UpdateControl.noUpdate(). Otherwise, return the value of UpdateControl.updateResource(resource):
if (deployment == null) {
Deployment newDeployment = createMemcachedDeployment(resource);
client.apps().deployments().create(newDeployment);
return UpdateControl.noUpdate();
}
After getting the Deployment, get the current and required replicas, as shown in the following example:
int currentReplicas = deployment.getSpec().getReplicas();
int requiredReplicas = resource.getSpec().getSize();
If currentReplicas does not match requiredReplicas, you must update the Deployment, as shown in the following example:
if (currentReplicas != requiredReplicas) {
deployment.getSpec().setReplicas(requiredReplicas);
client.apps().deployments().createOrReplace(deployment);
return UpdateControl.noUpdate();
}
The following example shows how to obtain the list of pods and their names:
List<Pod> pods = client.pods()
.inNamespace(resource.getMetadata().getNamespace())
.withLabels(labelsForMemcached(resource))
.list()
.getItems();
List<String> podNames =
pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());
Check whether the resources were created and verify the pod names against the Memcached resource status. If a mismatch exists in either of these conditions, perform a reconciliation as shown in the following example:
if (resource.getStatus() == null
|| !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
resource.getStatus().setNodes(podNames);
return UpdateControl.updateResource(resource);
}
Defining labelsForMemcached
labelsForMemcached is a utility method that returns the map of labels to attach to the resources:
private Map<String, String> labelsForMemcached(Memcached m) {
Map<String, String> labels = new HashMap<>();
labels.put("app", "memcached");
labels.put("memcached_cr", m.getMetadata().getName());
return labels;
}
Defining createMemcachedDeployment
The createMemcachedDeployment method uses the fabric8 DeploymentBuilder class:
private Deployment createMemcachedDeployment(Memcached m) {
Deployment deployment = new DeploymentBuilder()
.withMetadata(
new ObjectMetaBuilder()
.withName(m.getMetadata().getName())
.withNamespace(m.getMetadata().getNamespace())
.build())
.withSpec(
new DeploymentSpecBuilder()
.withReplicas(m.getSpec().getSize())
.withSelector(
new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
.withTemplate(
new PodTemplateSpecBuilder()
.withMetadata(
new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
.withSpec(
new PodSpecBuilder()
.withContainers(
new ContainerBuilder()
.withImage("memcached:1.4.36-alpine")
.withName("memcached")
.withCommand("memcached", "-m=64", "-o", "modern", "-v")
.withPorts(
new ContainerPortBuilder()
.withContainerPort(11211)
.withName("memcached")
.build())
.build())
.build())
.build())
.build())
.build();
deployment.addOwnerReference(m);
return deployment;
}
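The call to deployment.addOwnerReference(m) marks the Memcached CR as the owner of the Deployment, so the Deployment is garbage collected when the CR is deleted. The OwnerReferenceBuilder class that appears in the import list of the example controller can be used to build a similar reference by hand. The following helper is a sketch of that alternative, not part of the tutorial code; it assumes the additional imports io.fabric8.kubernetes.api.model.OwnerReference and java.util.Collections:
```
// Hypothetical helper: builds an owner reference for the Memcached CR by hand
// instead of relying on deployment.addOwnerReference(m).
private void setOwner(Deployment deployment, Memcached m) {
    OwnerReference ownerRef = new OwnerReferenceBuilder()
        .withApiVersion(m.getApiVersion())
        .withKind(m.getKind())
        .withName(m.getMetadata().getName())
        .withUid(m.getMetadata().getUid())
        .withController(true)
        .build();
    deployment.getMetadata().setOwnerReferences(Collections.singletonList(ownerRef));
}
```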
Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
Run locally outside the cluster.
Run as a deployment on the cluster.
Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
Running locally outside the cluster
You can run your Operator project locally outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to compile the Operator:
$ mvn clean install
Example output
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.193 s
[INFO] Finished at: 2021-05-26T12:16:54-04:00
[INFO] ------------------------------------------------------------------------
Run the following command to install the CRD to the default namespace:
$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml
Example output
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created
Create a file called rbac.yaml as shown in the following example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: memcached-operator-admin
subjects:
- kind: ServiceAccount
name: memcached-quarkus-operator-operator
namespace: <operator_namespace>
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: ""
Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator service account by applying the rbac.yaml file:
$ oc apply -f rbac.yaml
Enter the following command to run the Operator:
$ java -jar target/quarkus-app/quarkus-run.jar
The java command will run the Operator and remain running until you end the process. You will need another terminal to complete the rest of these commands.
Apply the memcached-sample.yaml file with the following command:
$ kubectl apply -f memcached-sample.yaml
Example output
memcached.cache.example.com/memcached-sample created
Verification
Run the following command to confirm that the pod has started:
$ oc get all
Example output
NAME READY STATUS RESTARTS AGE
pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s
Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg option will need to be used for this purpose. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to install the CRD to the default namespace:
$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml
Example output
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created
Create a file called rbac.yaml as shown in the following example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: memcached-operator-admin
subjects:
- kind: ServiceAccount
name: memcached-quarkus-operator-operator
namespace: <operator_namespace>
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: ""
The rbac.yaml file will be applied at a later step.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator service account by applying the rbac.yaml file created in a previous step:
$ oc apply -f rbac.yaml
Run the following command to verify that the Operator is running:
$ oc get all -n default
Example output
NAME READY UP-TO-DATE AVAILABLE AGE
pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s
Run the following command to apply the memcached-sample.yaml file and create the memcached-sample pod:
$ oc apply -f memcached-sample.yaml
Example output
memcached.cache.example.com/memcached-sample created
Verification
Run the following command to confirm the pods have started:
$ oc get all
Example output
NAME READY STATUS RESTARTS AGE
pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s
pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s
Bundling an Operator and deploying with Operator Lifecycle Manager
Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg option will need to be used for this purpose. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:
$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile named bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which can reference one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:
$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OKD and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OKD 4)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \ (1)
    -n <namespace> \ (2)
    <registry>/<user>/<bundle_image_name>:<tag> (3)
1 The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
2 Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
3 If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
As of OKD 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.
This command performs the following actions:
- Creates an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
Additional resources
See Project layout for Java-based Operators to learn about the directory structures created by the Operator SDK.
If a cluster-wide egress proxy is configured, cluster administrators can override the proxy settings or inject a custom CA certificate for specific Operators running on Operator Lifecycle Manager (OLM).