Installing the Network Observability Operator
Installing Loki is a prerequisite for using the Network Observability Operator. It is recommended to install Loki using the Loki Operator; therefore, these steps are documented below prior to the Network Observability Operator installation.
The Loki Operator integrates a gateway that implements multi-tenancy and authentication with Loki for flow data storage. The LokiStack resource manages Loki, which is a scalable, highly-available, multi-tenant log aggregation system, and a web proxy with OKD authentication. The LokiStack proxy uses OKD authentication to enforce multi-tenancy and facilitate the saving and indexing of data in Loki log stores.
The Loki Operator can also be used for Logging with the LokiStack. The Network Observability Operator requires a dedicated LokiStack separate from Logging.
Installing the Loki Operator
It is recommended to install Loki using the Loki Operator version 5.6. This version provides the ability to create a LokiStack instance using the openshift-network tenant configuration mode. It also provides fully-automatic, in-cluster authentication and authorization support for Network Observability.
Prerequisites
A supported log store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)
OKD 4.10+.
Linux Kernel 4.18+.
There are several ways to install Loki. One way to install the Loki Operator is by using the OKD web console OperatorHub.
Procedure
Install the Loki Operator:
In the OKD web console, click Operators → OperatorHub.
Choose Loki Operator from the list of available Operators, and click Install.
Under Installation Mode, select All namespaces on the cluster.
Verify that you installed the Loki Operator. Visit the Operators → Installed Operators page and look for Loki Operator.
Verify that Loki Operator is listed with Status as Succeeded in all the projects.
Create a Secret YAML file. You can create this secret in the web console or CLI.
Using the web console, navigate to the Project → All Projects dropdown and select Create Project. Name the project netobserv and click Create.
Navigate to the Import icon, +, in the top right corner. Drop your YAML file into the editor. It is important to create this YAML file in the netobserv namespace and to use the access_key_id and access_key_secret fields to specify your credentials.
Once you create the secret, you should see it listed under Workloads → Secrets in the web console.
The following shows an example secret YAML file:
apiVersion: v1
kind: Secret
metadata:
  name: loki-s3
  namespace: netobserv
stringData:
  access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
  access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=
  bucketnames: s3-bucket-name
  endpoint: https://s3.eu-central-1.amazonaws.com
  region: eu-central-1
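The credential values in this example are base64-encoded copies of the well-known AWS sample keys. If you want to sanity-check what a base64 value contains before creating the secret, a minimal Python sketch (not part of the installation steps, and the variable names are illustrative):

```python
import base64

# Example value taken from the secret above; it decodes to the
# AWS sample access key ID used throughout AWS documentation.
encoded_id = "QUtJQUlPU0ZPRE5ON0VYQU1QTEUK"

# Decode, then strip the trailing newline that is embedded in the encoded value.
decoded_id = base64.b64decode(encoded_id).decode("utf-8").strip()
print(decoded_id)  # AKIAIOSFODNN7EXAMPLE
```

Replace the example values with the base64-encoded credentials for your own object store before creating the secret.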
To uninstall Loki, refer to the uninstallation process that corresponds with the method you used to install Loki. You might have remaining data stored in an object store and a persistent volume that must be removed.
Create a LokiStack custom resource
It is recommended to deploy the LokiStack in the same namespace referenced by the FlowCollector specification, spec.namespace. You can use the web console or CLI to create a namespace or new project.
Procedure
Navigate to Operators → Installed Operators.
In the Details, under Provided APIs, select LokiStack, and click Create LokiStack.
Ensure the following fields are specified in either Form View or YAML view:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: loki
  namespace: netobserv
spec:
  size: 1x.small
  storage:
    schemas:
    - version: v12
      effectiveDate: '2022-06-01'
    secret:
      name: loki-s3
      type: s3
  storageClassName: gp3 (1)
  tenants:
    mode: openshift-network
1 Use a storage class name that is available on the cluster for ReadWriteOnce access mode. You can use oc get storageclasses to see what is available on your cluster.
You must not reuse the same LokiStack that is used for cluster logging.
Deployment Sizing
Sizing for Loki follows the format of N<x>.<size>, where the value <N> is the number of instances and <size> specifies performance capabilities.
1x.extra-small is for demo purposes only, and is not supported.
| | 1x.extra-small | 1x.small | 1x.medium |
|---|---|---|---|
| Data transfer | Demo use only | 500GB/day | 2TB/day |
| Queries per second (QPS) | Demo use only | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor | None | 2 | 3 |
| Total CPU requests | 5 vCPUs | 36 vCPUs | 54 vCPUs |
| Total memory requests | 7.5Gi | 63Gi | 139Gi |
| Total disk requests | 150Gi | 300Gi | 450Gi |
LokiStack ingestion limits and health alerts
The LokiStack instance comes with default settings according to the configured size. It is possible to override some of these settings, such as the ingestion and query limits. You might want to update them if you get Loki errors showing up in the Console plugin or in the flowlogs-pipeline logs. An automatic alert in the web console notifies you when these limits are reached.
Here is an example of configured limits:
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 40
        ingestionRate: 20
        maxGlobalStreamsPerTenant: 25000
      queries:
        maxChunksPerQuery: 2000000
        maxEntriesLimitPerQuery: 10000
        maxQuerySeries: 3000
For more information about these settings, see the LokiStack API reference.
Create roles for authentication and authorization
Specify authentication and authorization configurations by defining a ClusterRole and a ClusterRoleBinding. You can create a YAML file to define these roles.
Procedure
Using the web console, click the Import icon, +.
Drop your YAML file into the editor and click Create:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: loki-netobserv-tenant
rules:
- apiGroups:
  - 'loki.grafana.com'
  resources:
  - network
  resourceNames:
  - logs
  verbs:
  - 'get'
  - 'create'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: loki-netobserv-tenant
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: loki-netobserv-tenant
subjects:
- kind: ServiceAccount
  name: flowlogs-pipeline (1)
  namespace: netobserv
1 The flowlogs-pipeline service account writes to Loki. If you are using Kafka, this value is flowlogs-pipeline-transformer.
Installing Kafka (optional)
The Kafka Operator is supported for large scale environments. You can install the Kafka Operator as Red Hat AMQ Streams from the Operator Hub, just as the Loki Operator and Network Observability Operator were installed.
To uninstall Kafka, refer to the uninstallation process that corresponds with the method you used to install.
Installing the Network Observability Operator
You can install the Network Observability Operator using the OKD web console OperatorHub. When you install the Operator, it provides the FlowCollector custom resource definition (CRD). You can set specifications in the web console when you create the FlowCollector.
Prerequisites
- Installed Loki. It is recommended to install Loki using the Loki Operator version 5.6.
This documentation assumes that your LokiStack instance name is loki. Using a different name requires additional configuration.
Procedure
In the OKD web console, click Operators → OperatorHub.
Choose Network Observability Operator from the list of available Operators in the OperatorHub, and click Install.
Select the checkbox Enable Operator recommended cluster monitoring on this Namespace.
Navigate to Operators → Installed Operators. Under Provided APIs for Network Observability, select the Flow Collector link.
Navigate to the Flow Collector tab, and click Create FlowCollector. Make the following selections in the form view:
spec.agent.ebpf.Sampling: Specify a sampling size for flows. Lower sampling sizes will have a higher impact on resource utilization. For more information, see the FlowCollector API reference, under spec.agent.ebpf.
spec.deploymentModel: If you are using Kafka, verify Kafka is selected.
spec.exporters: If you are using Kafka, you can optionally send network flows to Kafka, so that they can be consumed by any processor or storage that supports Kafka input, such as Splunk, Elasticsearch, or Fluentd. To do this, set the following specifications:
Set the type to KAFKA.
Set the address as kafka-cluster-kafka-bootstrap.netobserv.
Set the topic as netobserv-flows-export. The Operator exports all flows to the configured Kafka topic.
Set the following tls specifications: certFile: service-ca.crt, name: kafka-gateway-ca-bundle, and type: configmap.
You can also configure this option at a later time by directly editing the YAML. For more information, see Export enriched network flow data.
loki.url: Since authentication is specified separately, this URL needs to be updated to https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network. The first part of the URL, "loki", should match the name of your LokiStack.
loki.statusUrl: Set this to https://loki-query-frontend-http.netobserv.svc:3100/. The first part of the URL, "loki", should match the name of your LokiStack.
loki.authToken: Select the FORWARD value.
tls.enable: Verify that the box is checked so it is enabled.
statusTls: The enable value is false by default.
For the certificate reference names loki-gateway-ca-bundle, loki-ca-bundle, and loki-query-frontend-http, the first part, loki, should match the name of your LokiStack.
Click Create.
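The form-view selections above map onto fields of the FlowCollector resource. The following is a sketch of the relevant YAML, assuming the resource names used in this document (the loki LokiStack, the netobserv namespace, and the Kafka names shown above); the apiVersion and the sampling value may differ depending on your Operator release, so check the FlowCollector API reference for your version:

```yaml
apiVersion: flows.netobserv.io/v1alpha1
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: EBPF
    ebpf:
      sampling: 50            # lower values collect more flows and use more resources
  deploymentModel: KAFKA      # only if you are using Kafka
  loki:
    url: 'https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network'
    statusUrl: 'https://loki-query-frontend-http.netobserv.svc:3100/'
    authToken: FORWARD
    tls:
      enable: true
  exporters:                  # optional: forward flows to Kafka consumers
  - type: KAFKA
    kafka:
      address: 'kafka-cluster-kafka-bootstrap.netobserv'
      topic: netobserv-flows-export
      tls:
        enable: true
        caCert:
          type: configmap
          name: kafka-gateway-ca-bundle
          certFile: service-ca.crt
```

If you are not using Kafka, omit the deploymentModel and exporters fields and keep the default direct deployment model.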
Verification
To confirm this was successful, navigate to Observe and verify that Network Traffic is listed in the options.
In the absence of application traffic within the OKD cluster, default filters might show that there are "No results", which results in no visual flow. Beside the filter selections, select Clear all filters to see the flow.
If you installed Loki using the Loki Operator, it is advised not to use querierUrl, as it can break the console access to Loki.
Additional resources
For more information about Flow Collector specifications, see the Flow Collector API Reference and the Flow Collector sample resource.
For more information about exporting flow data to Kafka for third party processing consumption, see Export enriched network flow data.
Uninstalling the Network Observability Operator
You can uninstall the Network Observability Operator using the OKD web console Operator Hub, working in the Operators → Installed Operators area.
Procedure
Remove the FlowCollector custom resource.
Click Flow Collector, which is next to the Network Observability Operator in the Provided APIs column.
Click the options menu for the cluster and select Delete FlowCollector.
Uninstall the Network Observability Operator.
Navigate back to the Operators → Installed Operators area.
Click the options menu next to the Network Observability Operator and select Uninstall Operator.
Navigate to Home → Projects, select openshift-netobserv-operator, then navigate to Actions and select Delete Project.
Remove the FlowCollector custom resource definition (CRD).
Navigate to Administration → CustomResourceDefinitions.
Look for FlowCollector and click the options menu.
Select Delete CustomResourceDefinition.
The Loki Operator and Kafka remain if they were installed and must be removed separately. Additionally, you might have remaining data stored in an object store, and a persistent volume that must be removed.