Installing the Operator Framework (Technology Preview)
Red Hat has announced the Operator Framework, an open source toolkit designed to manage Kubernetes native applications, called Operators, in a more effective, automated, and scalable way.
The following sections provide instructions for trying out the Technology Preview Operator Framework in OKD 3.11 as a cluster administrator.
What’s in the Technology Preview?
The Technology Preview Operator Framework installs the Operator Lifecycle Manager (OLM), which aids cluster administrators in installing, upgrading, and granting access to Operators running on their OKD cluster.
The OKD web console is also updated with new management screens that let cluster administrators install Operators and grant specific projects access to the catalog of Operators available on the cluster.
For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it.
Figure 1. Operator Catalog Sources
In the screenshot, you can see the pre-loaded catalog sources of partner Operators from leading software vendors:
Couchbase Operator
Couchbase offers a NoSQL database that provides a mechanism for storage and retrieval of data modeled in ways other than the tabular relations used in relational databases. Available on OKD 3.11 as a developer preview and supported by Couchbase, the Operator allows you to run Couchbase deployments natively on OKD. It handles installation and can more effectively fail over your NoSQL clusters.
Dynatrace Operator
Dynatrace application monitoring provides performance metrics in real time and can help detect and diagnose problems automatically. The Operator simplifies installation of the container-focused monitoring stack and connects it back to the Dynatrace monitoring cloud, constantly watching custom resources and monitoring desired states.
MongoDB Operator
MongoDB is a distributed, transactional database that stores data in flexible, JSON-like documents. The Operator supports deploying both production-ready replica sets and sharded clusters, and standalone dev/test instances. It works in conjunction with MongoDB Ops Manager, ensuring all clusters are deployed according to operational best practices.
Also included are the following Red Hat-provided Operators:
Red Hat AMQ Streams Operator
Red Hat AMQ Streams is a massively scalable, distributed, and high performance data streaming platform based on the Apache Kafka project. It offers a distributed backbone that allows microservices and other applications to share data with extremely high throughput and extremely low latency.
etcd Operator
etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines. This Operator enables users to configure and manage the complexities of etcd using a simple declarative configuration that creates, configures, and manages etcd clusters.
Prometheus Operator
Prometheus is a cloud native monitoring system co-hosted with Kubernetes within the CNCF. This Operator includes application domain knowledge to take care of common tasks such as create/destroy, simple configuration, automatic generation of monitoring target configurations via labels, and more.
Installing Operator Lifecycle Manager using Ansible
To install the Technology Preview Operator Framework, you can use the included playbook with the OKD openshift-ansible installer after installing your cluster.
Alternatively, the Technology Preview Operator Framework can be installed during initial cluster installation. See Configuring Your Inventory File for separate instructions.
Prerequisites
An existing OKD 3.11 cluster
Access to the cluster using an account with cluster-admin permissions
Ansible playbooks provided by the latest openshift-ansible installer
Procedure
Change to the playbook directory and run the registry authentication playbook using your inventory file to authorize your nodes with your registry credentials:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -i <inventory_file> \
playbooks/updates/registry_auth.yml
Change to the playbook directory and run the OLM installation playbook using your inventory file:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -i <inventory_file> \
playbooks/olm/config.yml
Navigate to the cluster’s web console using a browser. A new section should now be available in the navigation on the left side of the page:
Figure 2. New Operators navigation section
This is where you can install Operators, grant projects access to them, and then launch instances for all of your environments.
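If you prefer to confirm the installation from the command line, you can check that the OLM pods are running. This is a minimal check, assuming the default operator-lifecycle-manager project used by the Technology Preview OLM:
$ oc get pods -n operator-lifecycle-manager
The OLM and catalog Operator pods should report a Running status before you continue.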
Launching your first Operator
This section walks through creating a new Couchbase cluster using the Couchbase Operator.
Prerequisites
OKD 3.11 with Technology Preview OLM enabled
Access to the cluster using an account with cluster-admin permissions
Couchbase Operator loaded to the Operator catalog (loaded by default with Technology Preview OLM)
Procedure
As a cluster administrator (a user with the cluster-admin role), create a new project in the OKD web console for this procedure. This example uses a project called couchbase-test.
Installing an Operator within a project is done through a Subscription object, which the cluster administrator can create and manage across the entire cluster. To view the available Subscriptions, navigate to the Cluster Console from the drop-down menu, then to the Operators → Catalog Sources screen in the left navigation. A CLI equivalent for listing catalog sources is sketched after the following note.
If you want to enable additional users to view, create, and manage Subscriptions in a project, they must have the admin and view roles for that project, as well as the view role for the operator-lifecycle-manager project. Cluster administrators can add these roles using the following commands:
$ oc policy add-role-to-user admin <user> -n <target_project>
$ oc policy add-role-to-user view <user> -n <target_project>
$ oc policy add-role-to-user view <user> -n operator-lifecycle-manager
This experience will be simplified in future releases of the OLM.
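If you prefer the CLI, you can also list the registered catalog sources directly. This is only a sketch; it assumes the default catalog sources live in the operator-lifecycle-manager project, as in the Technology Preview OLM:
$ oc get catalogsources -n operator-lifecycle-manager
The certified-operators source referenced later in this procedure should appear in the output.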
Subscribe the desired project to the Couchbase catalog source from either the web console or CLI.
Choose one of the following methods:
For the web console method, ensure you are viewing the desired project, then click Create Subscription on an Operator from this screen to install it to the project.
For the CLI method, create a YAML file using the following definition:
couchbase-subscription.yaml file
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  generateName: couchbase-enterprise-
  namespace: couchbase-test (1)
spec:
  source: certified-operators
  name: couchbase-enterprise
  startingCSV: couchbase-operator.v1.0.0
  channel: preview
1 Ensure the namespace field in the metadata section is set to the desired project.
Then, create the Subscription using the CLI:
$ oc create -f couchbase-subscription.yaml
After the Subscription is created, the Operator appears in the Cluster Service Versions screen, which is the catalog that users can use to launch the software provided by the Operator. Click the Couchbase Operator to view more details about this Operator’s features:
Figure 3. Couchbase Operator overview
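You can also confirm from the CLI that the Subscription was created and that the Operator’s ClusterServiceVersion (CSV) was installed into the project. This is a quick check, assuming the couchbase-test project used in this example:
$ oc get subscriptions -n couchbase-test
$ oc get clusterserviceversions -n couchbase-test
A CSV matching the startingCSV from the Subscription (couchbase-operator.v1.0.0) should eventually reach the Succeeded phase.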
Before creating the Couchbase cluster, use the web console or CLI to create a secret with the following definition, which holds the credentials for the super user account. The Operator reads this secret on startup and configures the database with these details:
Couchbase secret
apiVersion: v1
kind: Secret
metadata:
  name: couchbase-admin-creds
  namespace: couchbase-test (1)
type: Opaque
stringData:
  username: admin
  password: password
1 Ensure the namespace field in the metadata section is set to the desired project.
Choose one of the following methods:
For the web console method, click Workloads → Secrets from the left navigation, then click Create and choose Secret from YAML to enter the secret definition.
For the CLI method, save the secret definition to a YAML file (for example, couchbase-secret.yaml) and use the CLI to create it in the desired project:
$ oc create -f couchbase-secret.yaml
Create the new Couchbase cluster.
All users with the edit role in a given project can create, manage, and delete application instances (a Couchbase cluster, in this example) managed by Operators that have already been installed in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, cluster administrators can add the role using the following command:
$ oc policy add-role-to-user edit <user> -n <target_project>
From the Cluster Service Versions section of the web console, click Create Couchbase Operator from the Operator’s Overview screen to begin creating a new CouchbaseCluster object. This object is a new type that the Operator has made available in the cluster. It works similarly to the built-in Deployment or ReplicaSet objects, but contains logic specific to managing Couchbase.
When clicking the Create Couchbase Operator button, you may receive a 404 error the first time. This is a known issue; as a workaround, refresh the page to continue. (BZ#1609731)
The web console contains a minimal starting template, but you can read the Couchbase documentation for all of the features the Operator supports.
Figure 4. Creating a Couchbase cluster
Ensure that you configure the name of the secret that contains the admin credentials:
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
  namespace: couchbase-test
spec:
  authSecret: couchbase-admin-creds
  baseImage: registry.connect.redhat.com/couchbase/server
[...]
When you have finalized your object definition, click Create in the web console (or use the CLI) to create your object. This triggers the Operator to start up the pods, services, and other components of the Couchbase cluster.
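If you are using the CLI instead, one way to do this is to save the definition to a file and create it in the desired project; the file name here is only an example:
$ oc create -f couchbase-cluster.yaml
You can then watch the Operator bring up the cluster:
$ oc get pods -n couchbase-test -w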
Your project now contains a number of resources created and configured automatically by the Operator:
Figure 5. Couchbase cluster details
Click the Resources tab to verify that a Kubernetes service has been created that allows you to access the database from other pods in your project.
Using the cb-example service, you can connect to the database using the credentials saved in the secret. Other application pods can mount and use this secret and communicate with the service.
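As an illustration of how another application might consume these resources, the following sketch shows a pod that pulls the admin credentials from the couchbase-admin-creds secret into environment variables and reaches the database through the cb-example service. The pod name, environment variable names, image placeholder, and port are assumptions for illustration only (8091 is the standard Couchbase administration port); adapt them to your application:
couchbase-client.yaml file (example)
apiVersion: v1
kind: Pod
metadata:
  name: couchbase-client
  namespace: couchbase-test
spec:
  containers:
  - name: client
    image: <application_image>          # replace with your application's image
    env:
    - name: COUCHBASE_USERNAME          # example variable name
      valueFrom:
        secretKeyRef:
          name: couchbase-admin-creds   # secret created earlier in this procedure
          key: username
    - name: COUCHBASE_PASSWORD          # example variable name
      valueFrom:
        secretKeyRef:
          name: couchbase-admin-creds
          key: password
    - name: COUCHBASE_URL               # example variable name
      value: http://cb-example:8091     # service created by the Operator; 8091 is the Couchbase admin port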
You now have a fault-tolerant installation of Couchbase that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers can easily obtain this database cluster by supplying high-level configuration; it is not required to have deep knowledge of the nuances of Couchbase clustering or failover.
Read more about the capabilities of the Couchbase Autonomous Operator in the official Couchbase documentation.
Getting involved
The OpenShift team would love to hear about your experience using the Operator Framework and suggestions you have for services you would like to see offered as an Operator.
Get in touch with the team by emailing openshift-operators@redhat.com.