Network Observability Operator release notes
The Network Observability Operator enables administrators to observe and analyze network traffic flows for OKD clusters.
These release notes track the development of the Network Observability Operator in OKD.
For an overview of the Network Observability Operator, see About Network Observability Operator.
Network Observability Operator 1.4.1
The following advisory is available for the Network Observability Operator 1.4.1:
CVEs
Bug fixes
In 1.4, there was a known issue when sending network flow data to Kafka. The Kafka message key was ignored, causing an error with connection tracking. Now the key is used for partitioning, so each flow from the same connection is sent to the same processor. (NETOBSERV-926)
In 1.4, the Inner flow direction was introduced to account for flows between pods running on the same node. Flows with the Inner direction were not taken into account in the generated Prometheus metrics derived from flows, resulting in under-evaluated bytes and packets rates. Now, derived metrics include flows with the Inner direction, providing correct bytes and packets rates. (NETOBSERV-1344)
Network Observability Operator 1.4.0
The following advisory is available for the Network Observability Operator 1.4.0:
Channel removal
You must switch your channel from v1.0.x to stable to receive the latest Operator updates. The v1.0.x channel is now removed.
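If you manage the Operator subscription as a manifest, switching channels is a one-line change in the Subscription resource. The following is a minimal sketch; the Subscription name, package name, and catalog source are illustrative and might differ in your cluster, so verify them against your existing Subscription:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: netobserv-operator                # illustrative name; check your existing Subscription
  namespace: openshift-netobserv-operator
spec:
  channel: stable                         # switch from the removed v1.0.x channel
  name: netobserv-operator                # package name as listed in your catalog (assumed here)
  source: community-operators             # illustrative catalog source
  sourceNamespace: openshift-marketplace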
New features and enhancements
Notable enhancements
The 1.4 release of the Network Observability Operator adds improvements and new capabilities to the OKD web console plugin and the Operator configuration.
Web console enhancements:
In the Query Options, the Duplicate flows checkbox is added to choose whether or not to show duplicated flows.
You can now filter source and destination traffic with One-way, Back-and-forth, and Swap filters.
The Network Observability metrics dashboards in Observe → Dashboards → NetObserv and NetObserv / Health are modified as follows:
The NetObserv dashboard shows top bytes, packets sent, and packets received per node, namespace, and workload. Flow graphs are removed from this dashboard.
The NetObserv / Health dashboard shows flows overhead as well as top flow rates per node, namespace, and workload.
Infrastructure and Application metrics are shown in a split-view for namespaces and workloads.
For more information, see Network Observability metrics and Quick filters.
Configuration enhancements:
You now have the option to specify different namespaces for any configured ConfigMap or Secret reference, such as in certificates configuration.
The spec.processor.clusterName parameter is added so that the name of the cluster appears in the flows data. This is useful in a multi-cluster context. When using OKD, leave it empty so that the name is determined automatically.
For more information, see Flow Collector sample resource and Flow Collector API Reference.
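The following FlowCollector excerpt is a minimal sketch of the new parameter, assuming the flows.netobserv.io/v1beta1 API version and the default resource name cluster; verify the schema against the Flow Collector API Reference:
apiVersion: flows.netobserv.io/v1beta1
kind: FlowCollector
metadata:
  name: cluster
spec:
  processor:
    clusterName: staging-cluster-1   # hypothetical name that appears in the flows data; leave empty on OKD for automatic detection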
Network Observability without Loki
The Network Observability Operator is now functional and usable without Loki. If Loki is not installed, it can only export flows to Kafka or IPFIX and provide metrics in the Network Observability metrics dashboards. For more information, see Network Observability without Loki.
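For example, disabling Loki storage is a single FlowCollector setting; this sketch assumes the spec.loki.enable field controls whether flows are written to Loki:
spec:
  loki:
    enable: false   # run without Loki; flows can still be exported to Kafka or IPFIX and metrics remain available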
DNS tracking
In 1.4, the Network Observability Operator makes use of eBPF tracepoint hooks to enable DNS tracking. You can monitor your network, conduct security analysis, and troubleshoot DNS issues in the Network Traffic and Overview pages in the web console.
For more information, see Configuring DNS tracking and Working with DNS tracking.
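DNS tracking is enabled through the eBPF agent feature list. A minimal FlowCollector excerpt, using the DNSTracking feature name referenced in the known issues below:
spec:
  agent:
    ebpf:
      features:
        - DNSTracking   # enables the eBPF tracepoint hooks used for DNS tracking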
SR-IOV support
You can now collect traffic from a cluster with a Single Root I/O Virtualization (SR-IOV) device. For more information, see Configuring the monitoring of SR-IOV interface traffic.
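Collecting SR-IOV traffic requires the eBPF agent to run in privileged mode, in addition to the SRIOVNetwork configuration described in the linked documentation. A minimal FlowCollector excerpt, as a sketch:
spec:
  agent:
    ebpf:
      privileged: true   # required so the agent can capture traffic in SR-IOV (non-default) network namespaces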
IPFIX exporter support
You can now export eBPF-enriched network flows to the IPFIX collector. For more information, see Export enriched network flow data.
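A sketch of an IPFIX exporter entry in the FlowCollector resource; the collector host and port are placeholders, and the exact field names should be verified against the Flow Collector API Reference:
spec:
  exporters:
    - type: IPFIX
      ipfix:
        targetHost: ipfix-collector.example.com   # hypothetical collector address
        targetPort: 4739                          # commonly used IPFIX port
        transport: tcp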
Packet drops
In the 1.4 release of the Network Observability Operator, eBPF tracepoint hooks are used to enable packet drop tracking. You can now detect and analyze the cause for packet drops and make decisions to optimize network performance. In OKD 4.14 and later, both host drops and OVS drops are detected. In OKD 4.13, only host drops are detected. For more information, see Configuring packet drop tracking and Working with packet drops.
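Packet drop tracking is also enabled through the eBPF agent feature list and needs a privileged agent. A minimal FlowCollector excerpt, as a sketch (the PacketDrop feature name is an assumption to verify in Configuring packet drop tracking):
spec:
  agent:
    ebpf:
      privileged: true    # the drop tracepoint hooks require a privileged agent
      features:
        - PacketDrop      # enables packet drop detection in the collected flows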
s390x architecture support
Network Observability Operator can now run on s390x architecture. Previously, it ran on amd64, ppc64le, or arm64.
Bug fixes
Previously, the Prometheus metrics exported by Network Observability were computed out of potentially duplicated network flows. In the related dashboards, from Observe → Dashboards, this could result in potentially doubled rates. Note that dashboards from the Network Traffic view were not affected. Now, network flows are filtered to eliminate duplicates prior to metrics calculation, which results in correct traffic rates displayed in the dashboards. (NETOBSERV-1131)
Previously, the Network Observability Operator agents were not able to capture traffic on network interfaces configured with Multus or SR-IOV, that is, non-default network namespaces. Now, all available network namespaces are recognized and used for capturing flows, allowing capturing traffic for SR-IOV. There are configurations needed for the FlowCollector and SRIOVNetwork custom resources to collect traffic. (NETOBSERV-1283)
Previously, in the Network Observability Operator details from Operators → Installed Operators, the FlowCollector Status field might have reported incorrect information about the state of the deployment. The status field now shows the proper conditions with improved messages. The history of events is kept, ordered by event date. (NETOBSERV-1224)
Previously, during spikes of network traffic load, certain eBPF pods were OOM-killed and went into a CrashLoopBackOff state. Now, the eBPF agent memory footprint is improved, so pods are not OOM-killed and do not enter a CrashLoopBackOff state. (NETOBSERV-975)
Previously, when processor.metrics.tls was set to PROVIDED, the insecureSkipVerify option value was forced to true. Now you can set insecureSkipVerify to true or false, and provide a CA certificate if needed. (NETOBSERV-1087)
Known issues
Since the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate change periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate change. This issue has only been observed in large-scale environments of 120 nodes or greater. (NETOBSERV-980)
Currently, when spec.agent.ebpf.features includes DNSTracking, larger DNS packets require the eBPF agent to look for the DNS header outside of the first socket buffer (SKB) segment. A new eBPF agent helper function needs to be implemented to support it. Currently, there is no workaround for this issue. (NETOBSERV-1304)
Currently, when spec.agent.ebpf.features includes DNSTracking, DNS over TCP packets require the eBPF agent to look for the DNS header outside of the first SKB segment. A new eBPF agent helper function needs to be implemented to support it. Currently, there is no workaround for this issue. (NETOBSERV-1245)
Currently, when using a KAFKA deployment model, if conversation tracking is configured, conversation events might be duplicated across Kafka consumers, resulting in inconsistent tracking of conversations and incorrect volumetric data. For that reason, it is not recommended to configure conversation tracking when deploymentModel is set to KAFKA. (NETOBSERV-926)
Currently, when processor.metrics.server.tls.type is configured to use a PROVIDED certificate, the Operator enters an unsteady state that might affect its performance and resource consumption. It is recommended not to use a PROVIDED certificate until this issue is resolved, and instead to use an auto-generated certificate by setting processor.metrics.server.tls.type to AUTO. (NETOBSERV-1293)
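Until NETOBSERV-1293 is resolved, the recommended workaround is to keep the auto-generated metrics server certificate, for example:
spec:
  processor:
    metrics:
      server:
        tls:
          type: AUTO   # recommended until the PROVIDED certificate issue is resolved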
Network Observability Operator 1.3.0
The following advisory is available for the Network Observability Operator 1.3.0:
Channel deprecation
You must switch your channel from v1.0.x to stable to receive future Operator updates. The v1.0.x channel is deprecated and planned for removal in the next release.
New features and enhancements
Multi-tenancy in Network Observability
- System administrators can allow and restrict individual user access, or group access, to the flows stored in Loki. For more information, see Multi-tenancy in Network Observability. A role-binding sketch is shown after this list.
Flow-based metrics dashboard
- This release adds a new dashboard, which provides an overview of the network flows in your OKD cluster. For more information, see Network Observability metrics.
Troubleshooting with the must-gather tool
- Information about the Network Observability Operator can now be included in the must-gather data for troubleshooting. For more information, see Network Observability must-gather.
Multiple architectures now supported
- Network Observability Operator can now run on amd64, ppc64le, or arm64 architectures. Previously, it only ran on amd64.
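As mentioned in the multi-tenancy item above, access to the stored flows can be granted per user or group. The following ClusterRoleBinding is a sketch only, assuming the netobserv-reader cluster role shipped with the Operator and a hypothetical user named alice; see Multi-tenancy in Network Observability for the supported procedure:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: netobserv-reader-alice
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: netobserv-reader      # assumed role name; verify that it exists in your cluster
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: alice               # hypothetical user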
Deprecated features
Deprecated configuration parameter setting
The release of Network Observability Operator 1.3 deprecates the spec.loki.authToken HOST setting. When using the Loki Operator, you must now only use the FORWARD setting.
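With HOST deprecated, a Loki Operator setup should forward the user token. A minimal FlowCollector excerpt:
spec:
  loki:
    authToken: FORWARD   # forward the user token to Loki; the HOST setting is deprecated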
Bug fixes
Previously, when the Operator was installed from the CLI, the Role and RoleBinding that are necessary for the Cluster Monitoring Operator to read the metrics were not installed as expected. The issue did not occur when the Operator was installed from the web console. Now, either way of installing the Operator installs the required Role and RoleBinding. (NETOBSERV-1003)
Since version 1.2, the Network Observability Operator can raise alerts when a problem occurs with the flows collection. Previously, due to a bug, the related configuration to disable alerts, spec.processor.metrics.disableAlerts, was not working as expected and was sometimes ineffectual. Now, this configuration is fixed so that it is possible to disable the alerts. (NETOBSERV-976)
Previously, when Network Observability was configured with spec.loki.authToken set to DISABLED, only a kubeadmin cluster administrator was able to view network flows. Other types of cluster administrators received authorization failures. Now, any cluster administrator is able to view network flows. (NETOBSERV-972)
Previously, a bug prevented users from setting spec.consolePlugin.portNaming.enable to false. Now, this setting can be set to false to disable port-to-service name translation. (NETOBSERV-971)
Previously, the metrics exposed by the console plugin were not collected by the Cluster Monitoring Operator (Prometheus), due to an incorrect configuration. Now the configuration has been fixed so that the console plugin metrics are correctly collected and accessible from the OKD web console. (NETOBSERV-765)
Previously, when processor.metrics.tls was set to AUTO in the FlowCollector, the flowlogs-pipeline servicemonitor did not adapt the appropriate TLS scheme, and metrics were not visible in the web console. Now the issue is fixed for AUTO mode. (NETOBSERV-1070)
Previously, certificate configuration, such as used for Kafka and Loki, did not allow specifying a namespace field, implying that the certificates had to be in the same namespace where Network Observability is deployed. Moreover, when using Kafka with TLS/mTLS, the user had to manually copy the certificate(s) to the privileged namespace where the eBPF agent pods are deployed and manually manage certificate updates, such as in the case of certificate rotation. Now, Network Observability setup is simplified by adding a namespace field for certificates in the FlowCollector resource. As a result, users can now install Loki or Kafka in different namespaces without needing to manually copy their certificates in the Network Observability namespace. The original certificates are watched so that the copies are automatically updated when needed. (NETOBSERV-773)
Previously, the SCTP, ICMPv4 and ICMPv6 protocols were not covered by the Network Observability agents, resulting in a less comprehensive network flows coverage. These protocols are now recognized to improve the flows coverage. (NETOBSERV-934)
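The namespace field for certificate references added by NETOBSERV-773 above can be used, for example, in the Loki TLS configuration. The following excerpt is a sketch assuming a ConfigMap-based CA certificate; the ConfigMap name, file key, and namespace are placeholders, and the field names should be checked against the Flow Collector API Reference:
spec:
  loki:
    tls:
      enable: true
      caCert:
        type: configmap            # or secret
        name: loki-ca              # hypothetical ConfigMap name
        certFile: service-ca.crt   # hypothetical file key within the ConfigMap
        namespace: loki-namespace  # the certificate can now live outside the Network Observability namespace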
Known issues
When processor.metrics.tls is set to PROVIDED in the FlowCollector, the flowlogs-pipeline servicemonitor is not adapted to the TLS scheme. (NETOBSERV-1087)
Since the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate change periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate change. This issue has only been observed in large-scale environments of 120 nodes or greater. (NETOBSERV-980)
Network Observability Operator 1.2.0
The following advisory is available for the Network Observability Operator 1.2.0:
Preparing for the next update
The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. Until the 1.2 release of the Network Observability Operator, the only channel available was v1.0.x. The 1.2 release of the Network Observability Operator introduces the stable update channel for tracking and receiving updates. You must switch your channel from v1.0.x to stable to receive future Operator updates. The v1.0.x channel is deprecated and planned for removal in a following release.
New features and enhancements
Histogram in Traffic Flows view
- You can now choose to show a histogram bar chart of flows over time. The histogram enables you to visualize the history of flows without hitting the Loki query limit. For more information, see Using the histogram.
Conversation tracking
- You can now query flows by Log Type, which enables grouping network flows that are part of the same conversation. For more information, see Working with conversations. A configuration sketch is shown after this list.
Network Observability health alerts
- The Network Observability Operator now creates automatic alerts if the flowlogs-pipeline is dropping flows because of errors at the write stage or if the Loki ingestion rate limit has been reached. For more information, see Viewing health information.
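As noted in the conversation tracking item above, conversations are grouped by Log Type. A minimal FlowCollector excerpt, assuming the spec.processor.logTypes field described in Working with conversations:
spec:
  processor:
    logTypes: CONVERSATIONS   # group flows into conversation events; other values include FLOWS (the default), ENDED_CONVERSATIONS, and ALL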
Bug fixes
Previously, after changing the namespace value in the FlowCollector spec, eBPF agent pods running in the previous namespace were not appropriately deleted. Now, the pods running in the previous namespace are appropriately deleted. (NETOBSERV-774)
Previously, after changing the caCert.name value in the FlowCollector spec (such as in the Loki section), FlowLogs-Pipeline pods and Console plug-in pods were not restarted, and therefore they were unaware of the configuration change. Now, the pods are restarted, so they get the configuration change. (NETOBSERV-772)
Previously, network flows between pods running on different nodes were sometimes not correctly identified as duplicates because they are captured by different network interfaces. This resulted in over-estimated metrics displayed in the console plug-in. Now, flows are correctly identified as duplicates, and the console plug-in displays accurate metrics. (NETOBSERV-755)
The “reporter” option in the console plug-in is used to filter flows based on the observation point of either source node or destination node. Previously, this option mixed the flows regardless of the node observation point. This was due to network flows being incorrectly reported as Ingress or Egress at the node level. Now, the network flow direction reporting is correct. The “reporter” option filters for source observation point, or destination observation point, as expected. (NETOBSERV-696)
Previously, for agents configured to send flows directly to the processor as gRPC+protobuf requests, the submitted payload could be too large and was rejected by the processor's gRPC server. This occurred under very-high-load scenarios and with only some configurations of the agent. The agent logged an error message, such as: grpc: received message larger than max. As a consequence, there was information loss about those flows. Now, the gRPC payload is split into several messages when the size exceeds a threshold. As a result, the server maintains connectivity. (NETOBSERV-617)
Known issue
- In the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate transition periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate transition. (NETOBSERV-980)
Notable technical changes
- Previously, you could install the Network Observability Operator using a custom namespace. This release introduces the conversion webhook, which changes the ClusterServiceVersion. Because of this change, all the available namespaces are no longer listed. Additionally, to enable Operator metrics collection, namespaces that are shared with other Operators, like the openshift-operators namespace, cannot be used. Now, the Operator must be installed in the openshift-netobserv-operator namespace. You cannot automatically upgrade to the new Operator version if you previously installed the Network Observability Operator using a custom namespace. If you previously installed the Operator using a custom namespace, you must delete the instance of the Operator that was installed and re-install your Operator in the openshift-netobserv-operator namespace. Note that custom namespaces, such as the commonly used netobserv namespace, are still possible for the FlowCollector, Loki, Kafka, and other plug-ins. (NETOBSERV-907) (NETOBSERV-956)
Network Observability Operator 1.1.0
The following advisory is available for the Network Observability Operator 1.1.0:
The Network Observability Operator is now stable and the release channel is upgraded to v1.1.0.
Bug fix
- Previously, unless the Loki authToken configuration was set to FORWARD mode, authentication was no longer enforced, allowing any user who could connect to the OKD console in an OKD cluster to retrieve flows without authentication. Now, regardless of the Loki authToken mode, only cluster administrators can retrieve flows. (BZ#2169468)