FlowCollector configuration parameters
FlowCollector is the Schema for the flowcollectors API, which pilots and configures netflow collection.
FlowCollector API specifications
Type
object
Property | Type | Description |
---|---|---|
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources |
kind | string | Kind is a string value representing the REST resource this object represents. Servers might infer this from the endpoint the client submits requests to. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds |
metadata | object | Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata |
spec | object | FlowCollectorSpec defines the desired state of FlowCollector |
status | object | FlowCollectorStatus defines the observed state of FlowCollector |
.spec
Description
FlowCollectorSpec defines the desired state of FlowCollector
Type
object
Required
agent
deploymentModel
Property | Type | Description |
---|---|---|
agent | object | agent for flows extraction. |
consolePlugin | object | consolePlugin defines the settings related to the OKD Console plugin, when available. |
deploymentModel | string | deploymentModel defines the desired type of deployment for flow processing. Possible values are “DIRECT” (default) to make the flow processor listen directly from the agents, or “KAFKA” to send flows to a Kafka pipeline before consumption by the processor. Kafka can provide better scalability, resiliency, and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka). |
exporters | array | exporters defines additional optional exporters for custom consumption or storage. This is an experimental feature. Currently, only KAFKA exporter is available. |
exporters[] | object | FlowCollectorExporter defines an additional exporter to send enriched flows to |
kafka | object | Kafka configuration, allowing the use of Kafka as a broker as part of the flow collection pipeline. Available when the “spec.deploymentModel” is “KAFKA”. |
loki | object | Loki, the flow store, client settings. |
namespace | string | namespace where NetObserv pods are deployed. If empty, the namespace of the operator is going to be used. |
processor | object | processor defines the settings of the component that receives the flows from the agent, enriches them, and forwards them to the Loki persistence layer. |
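For orientation, the following is a minimal, illustrative FlowCollector manifest built only from the fields in the table above. The apiVersion (flows.netobserv.io/v1alpha1), the resource name cluster, and the netobserv namespace are assumptions for this sketch; adjust them to match your installation.

```yaml
apiVersion: flows.netobserv.io/v1alpha1   # assumed group/version for this API
kind: FlowCollector
metadata:
  name: cluster                           # assumed name; FlowCollector is typically a single cluster-wide instance
spec:
  namespace: netobserv                    # example; when empty, the operator namespace is used
  deploymentModel: DIRECT                 # or KAFKA to route flows through a Kafka pipeline
  agent:
    type: EBPF                            # required: agent for flows extraction
```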
.spec.agent
Description
agent for flows extraction.
Type
object
Required
type
Property | Type | Description |
---|---|---|
ebpf | object | ebpf describes the settings related to the eBPF-based flow reporter when the “agent.type” property is set to “EBPF”. |
ipfix | object | ipfix describes the settings related to the IPFIX-based flow reporter when the “agent.type” property is set to “IPFIX”. |
type | string | type selects the flow tracing agent. Possible values are “EBPF” (default) to use the NetObserv eBPF agent, or “IPFIX” to use the legacy IPFIX collector. “EBPF” is recommended in most cases as it offers better performance and should work regardless of the CNI installed on the cluster. “IPFIX” works with the OVN-Kubernetes CNI (other CNIs could work if they support exporting IPFIX, but they would require manual configuration). |
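As a quick sketch, switching between the two agents only requires the type field, with the values taken from the description above:

```yaml
spec:
  agent:
    type: EBPF     # default and recommended; set to IPFIX for the legacy collector with OVN-Kubernetes
```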
.spec.agent.ebpf
Description
ebpf describes the settings related to the eBPF-based flow reporter when the “agent.type” property is set to “EBPF”.
Type
object
Property | Type | Description |
---|---|---|
cacheActiveTimeout | string | cacheActiveTimeout is the max period during which the reporter will aggregate flows before sending. Increasing cacheActiveTimeout and cacheMaxFlows can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection. |
cacheMaxFlows | integer | cacheMaxFlows is the max number of flows in an aggregate; when reached, the reporter sends the flows. Increasing cacheActiveTimeout and cacheMaxFlows can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection. |
debug | object | Debug allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed exclusively for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Users setting its values do it at their own risk. |
excludeInterfaces | array | excludeInterfaces contains the interface names that will be excluded from flow tracing. If an entry is enclosed by slashes, it is matched as a regular expression; otherwise it is matched as a case-sensitive string. |
imagePullPolicy | string | imagePullPolicy is the Kubernetes pull policy for the image defined above |
interfaces | array | interfaces contains the interface names from where flows will be collected. If empty, the agent will fetch all the interfaces in the system, except the ones listed in excludeInterfaces. If an entry is enclosed by slashes, it is matched as a regular expression; otherwise it is matched as a case-sensitive string. |
kafkaBatchSize | integer | kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB. |
logLevel | string | logLevel defines the log level for the NetObserv eBPF Agent |
privileged | boolean | privileged mode for the eBPF Agent container. In general this setting can be ignored or set to false: in that case, the operator sets granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) on the container to enable its correct operation. If for some reason these capabilities cannot be set, such as an old kernel version that does not know CAP_BPF, then you can turn on this mode for broader privileges. |
resources | object | resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
sampling | integer | sampling rate of the flow reporter. 100 means one flow out of 100 is sent. 0 or 1 means all flows are sampled. |
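A hedged tuning example combining the eBPF fields above; all values are illustrative, not recommendations:

```yaml
spec:
  agent:
    type: EBPF
    ebpf:
      sampling: 50                 # one flow out of 50; 0 or 1 samples every flow
      cacheActiveTimeout: 5s       # example aggregation period before flushing
      cacheMaxFlows: 100000        # example flush threshold on the number of aggregated flows
      excludeInterfaces: ["lo"]    # example: skip the loopback interface
      privileged: false            # granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) are set instead
```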
.spec.agent.ebpf.debug
Description
Debug allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed exclusively for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Users setting its values do it at their own risk.
Type
object
Property | Type | Description |
---|---|---|
env | object | env allows passing custom environment variables to the NetObserv Agent. Useful for passing some very specific performance-tuning options, such as GOGC and GOMAXPROCS, that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios. |
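A minimal sketch of the debug environment passthrough; GOGC and GOMAXPROCS are the variables named above, and the values are purely illustrative:

```yaml
spec:
  agent:
    type: EBPF
    ebpf:
      debug:
        env:
          GOGC: "400"         # example: relax Go garbage-collection frequency
          GOMAXPROCS: "2"     # example: cap the number of OS threads used by the agent
```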
.spec.agent.ebpf.resources
Description
resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Type
object
Property | Type | Description |
---|---|---|
limits | object | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
requests | object | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
.spec.agent.ipfix
Description
ipfix describes the settings related to the IPFIX-based flow reporter when the “agent.type” property is set to “IPFIX”.
Type
object
Property | Type | Description |
---|---|---|
cacheActiveTimeout | string | cacheActiveTimeout is the max period during which the reporter will aggregate flows before sending |
cacheMaxFlows | integer | cacheMaxFlows is the max number of flows in an aggregate; when reached, the reporter sends the flows |
clusterNetworkOperator | object | clusterNetworkOperator defines the settings related to the OKD Cluster Network Operator, when available. |
forceSampleAll | boolean | forceSampleAll allows disabling sampling in the IPFIX-based flow reporter. It is not recommended to sample all the traffic with IPFIX, as it might generate cluster instability. If you really want to do that, set this flag to true. Use at your own risk. When it is set to true, the value of “sampling” is ignored. |
ovnKubernetes | object | ovnKubernetes defines the settings of the OVN-Kubernetes CNI, when available. This configuration is used when using OVN’s IPFIX exports, without OKD. When using OKD, refer to the clusterNetworkOperator property instead. |
sampling | integer | sampling is the sampling rate on the reporter. 100 means one flow out of 100 is sent. To ensure cluster stability, it is not possible to set a value below 2. If you really want to sample every packet, which might impact the cluster stability, refer to “forceSampleAll”. Alternatively, you can use the eBPF Agent instead of IPFIX. |
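An illustrative IPFIX configuration using the fields above; the values are examples only and keep the safeguards described (sampling not below 2 unless forceSampleAll is set):

```yaml
spec:
  agent:
    type: IPFIX
    ipfix:
      sampling: 400             # one flow out of 400; values below 2 are rejected
      cacheActiveTimeout: 60s   # example aggregation period
      cacheMaxFlows: 100        # example aggregate size
      forceSampleAll: false     # true disables sampling entirely, at your own risk
```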
.spec.agent.ipfix.clusterNetworkOperator
Description
clusterNetworkOperator defines the settings related to the OKD Cluster Network Operator, when available.
Type
object
Property | Type | Description |
---|---|---|
namespace | string | namespace where the config map is going to be deployed. |
.spec.agent.ipfix.ovnKubernetes
Description
ovnKubernetes defines the settings of the OVN-Kubernetes CNI, when available. This configuration is used when using OVN’s IPFIX exports, without OKD. When using OKD, refer to the clusterNetworkOperator property instead.
Type
object
Property | Type | Description |
---|---|---|
containerName | string | containerName defines the name of the container to configure for IPFIX. |
daemonSetName | string | daemonSetName defines the name of the DaemonSet controlling the OVN-Kubernetes pods. |
namespace | string | namespace where OVN-Kubernetes pods are deployed. |
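A sketch of the ovnKubernetes block for a non-OKD OVN-Kubernetes installation; the namespace, DaemonSet, and container names below are hypothetical and must match your actual OVN-Kubernetes deployment:

```yaml
spec:
  agent:
    type: IPFIX
    ipfix:
      ovnKubernetes:
        namespace: ovn-kubernetes       # hypothetical: namespace where OVN-Kubernetes pods run
        daemonSetName: ovnkube-node     # hypothetical: DaemonSet controlling those pods
        containerName: ovnkube-node     # hypothetical: container to configure for IPFIX
```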
.spec.consolePlugin
Description
consolePlugin defines the settings related to the OKD Console plugin, when available.
Type
object
Required
register
Property | Type | Description |
---|---|---|
autoscaler | object | autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment. |
imagePullPolicy | string | imagePullPolicy is the Kubernetes pull policy for the image defined above |
logLevel | string | logLevel for the console plugin backend |
port | integer | port is the plugin service port |
portNaming | object | portNaming defines the configuration of the port-to-service name translation |
quickFilters | array | quickFilters configures quick filter presets for the Console plugin |
quickFilters[] | object | QuickFilter defines preset configuration for Console’s quick filters |
register | boolean | register allows, when set to true, to automatically register the provided console plugin with the OKD Console operator. When set to false, you can still register it manually by editing console.operator.OKD.io/cluster and adding the plugin to the console operator spec. |
replicas | integer | replicas defines the number of replicas (pods) to start. |
resources | object | resources, in terms of compute resources, required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
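A hedged consolePlugin example using the fields above; register is the only required field, and the other values are illustrative:

```yaml
spec:
  consolePlugin:
    register: true              # auto-register the plugin with the Console operator
    port: 9001                  # example plugin service port
    replicas: 1
    logLevel: info              # example log level
    imagePullPolicy: IfNotPresent
```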
.spec.consolePlugin.autoscaler
Description
autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment. Please refer to HorizontalPodAutoscaler documentation (autoscaling/v2)
.spec.consolePlugin.portNaming
Description
portNaming defines the configuration of the port-to-service name translation
Type
object
Property | Type | Description |
---|---|---|
enable | boolean | enable the console plugin port-to-service name translation |
portNames | object | portNames defines additional port names to use in the console, for example, portNames: {“3100”: “loki”} |
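A short sketch of port-to-service name translation, reusing the portNames example from the table:

```yaml
spec:
  consolePlugin:
    portNaming:
      enable: true
      portNames:
        "3100": loki      # display traffic on port 3100 as "loki" in the Console
```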
.spec.consolePlugin.quickFilters
Description
quickFilters configures quick filter presets for the Console plugin
Type
array
.spec.consolePlugin.quickFilters[]
Description
QuickFilter defines preset configuration for Console’s quick filters
Type
object
Required
filter
name
Property | Type | Description |
---|---|---|
default | boolean | default defines whether this filter should be active by default or not |
filter | object | filter is a set of keys and values to be set when this filter is selected. Each key can relate to a list of values using a comma-separated string, for example, filter: {“src_namespace”: “namespace1,namespace2”} |
name | string | name of the filter, as displayed in the Console |
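An illustrative quick filter preset; the filter name is hypothetical and the key/value pair reuses the comma-separated example from the table:

```yaml
spec:
  consolePlugin:
    quickFilters:
    - name: Sample namespaces                    # hypothetical name displayed in the Console
      default: true
      filter:
        src_namespace: "namespace1,namespace2"   # comma-separated list of values for one key
```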
.spec.consolePlugin.resources
Description
resources, in terms of compute resources, required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Type
object
Property | Type | Description |
---|---|---|
limits | object | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
requests | object | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
.spec.exporters
Description
exporters defines additional optional exporters for custom consumption or storage. This is an experimental feature. Currently, only KAFKA exporter is available.
Type
array
.spec.exporters[]
Description
FlowCollectorExporter defines an additional exporter to send enriched flows to
Type
object
Required
type
Property | Type | Description |
---|---|---|
kafka | object | kafka describes the Kafka configuration (address, topic…) to send enriched flows to. |
type | string | type selects the type of exporter. Only “KAFKA” is available at the moment. |
.spec.exporters[].kafka
Description
kafka describes the Kafka configuration, such as address or topic, to send enriched flows to.
Type
object
Required
address
topic
Property | Type | Description |
---|---|---|
address | string | address of the Kafka server |
tls | object | tls client configuration. When using TLS, verify the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, Kafka certificate needs to be copied in the agent namespace (by default it’s netobserv-privileged). |
topic | string | kafka topic to use. It must exist, NetObserv will not create it. |
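A hedged exporter entry; the broker address is hypothetical, and the topic must already exist since NetObserv does not create it:

```yaml
spec:
  exporters:
  - type: KAFKA                                     # only KAFKA is available at the moment
    kafka:
      address: kafka-bootstrap.example.svc:9092     # hypothetical broker address
      topic: netobserv-flows-export                 # hypothetical existing topic
```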
.spec.exporters[].kafka.tls
Description
tls client configuration. When using TLS, verify the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, Kafka certificate needs to be copied in the agent namespace (by default it’s netobserv-privileged).
Type
object
Property | Type | Description |
---|---|---|
caCert | object | caCert defines the reference of the certificate for the Certificate Authority |
enable | boolean | enable TLS |
insecureSkipVerify | boolean | insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the CACert field will be ignored |
userCert | object | userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS) |
.spec.exporters[].kafka.tls.caCert
Description
caCert defines the reference of the certificate for the Certificate Authority
Type
object
Property | Type | Description |
---|---|---|
certFile | string | certFile defines the path to the certificate file name within the config map / Secret |
certKey | string | certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary. |
name | string | name of the config map or Secret containing certificates |
type | string | type for the certificate reference: config map or secret |
.spec.exporters[].kafka.tls.userCert
Description
userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)
Type
object
Property | Type | Description |
---|---|---|
certFile | string | certFile defines the path to the certificate file name within the config map / Secret |
certKey | string | certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary. |
name | string | name of the config map or Secret containing certificates |
type | string | type for the certificate reference: config map or secret |
.spec.kafka
Description
kafka configuration, allowing the use of Kafka as a broker as part of the flow collection pipeline. Available when the “spec.deploymentModel” is “KAFKA”.
Type
object
Required
address
topic
Property | Type | Description |
---|---|---|
address | string | address of the Kafka server |
tls | object | tls client configuration. When using TLS, verify the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, Kafka certificate needs to be copied in the agent namespace (by default it’s netobserv-privileged). |
topic | string | kafka topic to use. It must exist, NetObserv will not create it. |
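A sketch of the Kafka broker settings for the KAFKA deployment model; the address and topic values are hypothetical:

```yaml
spec:
  deploymentModel: KAFKA                          # spec.kafka is only used in this mode
  kafka:
    address: kafka-bootstrap.example.svc:9092     # hypothetical broker address (plain listener)
    topic: network-flows                          # hypothetical existing topic
```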
.spec.kafka.tls
Description
tls client configuration. When using TLS, verify the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, Kafka certificate needs to be copied in the agent namespace (by default it’s netobserv-privileged).
Type
object
Property | Type | Description |
---|---|---|
caCert | object | caCert defines the reference of the certificate for the Certificate Authority |
enable | boolean | enable TLS |
insecureSkipVerify | boolean | insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the CACert field will be ignored |
userCert | object | userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS) |
.spec.kafka.tls.caCert
Description
caCert defines the reference of the certificate for the Certificate Authority
Type
object
Property | Type | Description |
---|---|---|
certFile | string | certFile defines the path to the certificate file name within the config map / Secret |
certKey | string | certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary. |
name | string | name of the config map or Secret containing certificates |
type | string | type for the certificate reference: config map or secret |
.spec.kafka.tls.userCert
Description
userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)
Type
object
Property | Type | Description |
---|---|---|
certFile | string | certFile defines the path to the certificate file name within the config map / Secret |
certKey | string | certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary. |
name | string | name of the config map or Secret containing certificates |
type | string | type for the certificate reference: config map or secret |
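Building on the previous sketch, a hedged TLS variant of the same Kafka settings; the Secret names and file names are hypothetical, and the caCert/userCert type value is assumed to accept secret, per the description above:

```yaml
spec:
  kafka:
    address: kafka-bootstrap.example.svc:9093     # TLS listener, generally port 9093
    topic: network-flows
    tls:
      enable: true
      insecureSkipVerify: false
      caCert:
        type: secret                              # config map or secret
        name: kafka-ca                            # hypothetical Secret holding the CA certificate
        certFile: ca.crt
      userCert:                                   # only needed for mTLS
        type: secret
        name: kafka-user                          # hypothetical Secret holding the client certificate
        certFile: user.crt
        certKey: user.key
```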
.spec.loki
Description
loki, the flow store, client settings.
Type
object
Property | Type | Description |
---|---|---|
authToken | string | authToken describes the way to get a token to authenticate to Loki. DISABLED does not send any token with the request. HOST uses the local pod service account to authenticate to Loki. FORWARD forwards the user token; in this mode, pods that do not receive user requests, such as the processor, use the local pod service account, similar to HOST mode. |
batchSize | integer | batchSize is the maximum batch size (in bytes) of logs to accumulate before sending |
batchWait | string | batchWait is the maximum time to wait before sending a batch |
maxBackoff | string | maxBackoff is the maximum backoff time for client connection between retries |
maxRetries | integer | maxRetries is the maximum number of retries for client connections |
minBackoff | string | minBackoff is the initial backoff time for client connection between retries |
querierURL | string | querierURL specifies the address of the Loki querier service, in case it is different from the Loki ingester URL. If empty, the URL value will be used (assuming that the Loki ingester and querier are in the same server). Important: if you installed Loki using the Loki Operator, it is advised not to use this setting. |
staticLabels | object | staticLabels is a map of common labels to set on each flow |
statusURL | string | statusURL specifies the address of the Loki /ready, /metrics, and /config endpoints, in case it is different from the Loki querier URL. If empty, the querierURL value will be used. This is useful to show error messages and some context in the frontend |
tenantID | string | tenantID is the Loki X-Scope-OrgID that identifies the tenant for each request. It will be ignored if instanceSpec is specified |
timeout | string | timeout is the maximum connection / request time limit. A timeout of zero means no timeout. |
tls | object | tls client configuration. |
url | string | url is the address of an existing Loki service to push the flows to. |
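An illustrative Loki client configuration; the URL, labels, and tuning values are examples only:

```yaml
spec:
  loki:
    url: http://loki.netobserv.svc:3100/     # hypothetical address of the Loki service to push flows to
    authToken: DISABLED                      # or HOST / FORWARD, as described above
    batchWait: 1s                            # example batching and retry tuning
    batchSize: 10485760                      # example: 10 MiB
    minBackoff: 1s
    maxBackoff: 5m
    maxRetries: 2
    staticLabels:
      app: netobserv-flowcollector           # example static label added to each flow
    tls:
      enable: false
```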
.spec.loki.tls
Description
tls client configuration.
Type
object
Property | Type | Description |
---|---|---|
caCert | object | caCert defines the reference of the certificate for the Certificate Authority |
enable | boolean | enable TLS |
insecureSkipVerify | boolean | insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the CACert field will be ignored |
userCert | object | userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS) |
.spec.loki.tls.caCert
Description
caCert defines the reference of the certificate for the Certificate Authority
Type
object
Property | Type | Description |
---|---|---|
certFile | string | certFile defines the path to the certificate file name within the config map / Secret |
certKey | string | certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary. |
name | string | name of the config map or Secret containing certificates |
type | string | type for the certificate reference: config map or secret |
.spec.loki.tls.userCert
Description
userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)
Type
object
Property | Type | Description |
---|---|---|
certFile | string | certFile defines the path to the certificate file name within the config map / Secret |
certKey | string | certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary. |
name | string | name of the config map or Secret containing certificates |
type | string | type for the certificate reference: config map or secret |
.spec.processor
Description
processor defines the settings of the component that receives the flows from the agent, enriches them, and forwards them to the Loki persistence layer.
Type
object
Property | Type | Description |
---|---|---|
debug | object | Debug allows setting some aspects of the internal configuration of the flow processor. This section is aimed exclusively for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Users setting its values do it at their own risk. |
dropUnusedFields | boolean | dropUnusedFields allows, when set to true, to drop fields that are known to be unused by OVS, to save storage space. |
enableKubeProbes | boolean | enableKubeProbes is a flag to enable or disable Kubernetes liveness and readiness probes |
healthPort | integer | healthPort is a collector HTTP port in the Pod that exposes the health check API |
imagePullPolicy | string | imagePullPolicy is the Kubernetes pull policy for the image defined above |
kafkaConsumerAutoscaler | object | kafkaConsumerAutoscaler spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer, which consumes Kafka messages. This setting is ignored when Kafka is disabled. |
kafkaConsumerBatchSize | integer | kafkaConsumerBatchSize indicates to the broker the maximum batch size, in bytes, that the consumer will accept. Ignored when not using Kafka. Default: 10MB. |
kafkaConsumerQueueCapacity | integer | kafkaConsumerQueueCapacity defines the capacity of the internal message queue used in the Kafka consumer client. Ignored when not using Kafka. |
kafkaConsumerReplicas | integer | kafkaConsumerReplicas defines the number of replicas (pods) to start for flowlogs-pipeline-transformer, which consumes Kafka messages. This setting is ignored when Kafka is disabled. |
logLevel | string | logLevel of the collector runtime |
metrics | object | Metrics define the processor configuration regarding metrics |
port | integer | port of the flow collector (host port). By convention, some values are not authorized: the port must not be below 1024 and must not equal these values: 4789, 6081, 500, and 4500. |
profilePort | integer | profilePort allows setting up a Go pprof profiler listening on this port |
resources | object | resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
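A hedged processor example using a subset of the fields above; the port respects the constraints listed (not below 1024 and not one of the reserved values), and the other values are illustrative:

```yaml
spec:
  processor:
    port: 2055                     # host port of the flow collector
    healthPort: 8080               # example health check port
    logLevel: info
    enableKubeProbes: true
    dropUnusedFields: true
    kafkaConsumerReplicas: 3       # only relevant with the KAFKA deployment model
```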
.spec.processor.debug
Description
Debug allows setting some aspects of the internal configuration of the flow processor. This section is aimed exclusively for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Users setting its values do it at their own risk.
Type
object
Property | Type | Description |
---|---|---|
env | object | env allows passing custom environment variables to the NetObserv Agent. Useful for passing some very specific performance-tuning options, such as GOGC and GOMAXPROCS, that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug and support scenarios. |
.spec.processor.kafkaConsumerAutoscaler
Description
kafkaConsumerAutoscaler spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer, which consumes Kafka messages. This setting is ignored when Kafka is disabled. Please refer to HorizontalPodAutoscaler documentation (autoscaling/v2)
.spec.processor.metrics
Description
Metrics define the processor configuration regarding metrics
Type
object
Property | Type | Description |
---|---|---|
ignoreTags | array | ignoreTags is a list of tags to specify which metrics to ignore |
server | object | metricsServer endpoint configuration for Prometheus scraper |
.spec.processor.metrics.server
Description
metricsServer endpoint configuration for Prometheus scraper
Type
object
Property | Type | Description |
---|---|---|
port | integer | the Prometheus HTTP port |
tls | object | TLS configuration. |
.spec.processor.metrics.server.tls
Description
TLS configuration.
Type
object
Property | Type | Description |
---|---|---|
provided | object | TLS configuration when “type” is set to “PROVIDED”. |
type | string | Select the type of TLS configuration: “DISABLED” (default) to not configure TLS for the endpoint, “PROVIDED” to manually provide a cert file and a key file, and “AUTO” to use the OKD auto-generated certificate using annotations |
.spec.processor.metrics.server.tls.provided
Description
TLS configuration.
Type
object
Property | Type | Description |
---|---|---|
certFile | string | certFile defines the path to the certificate file name within the config map / Secret |
certKey | string | certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary. |
name | string | name of the config map or Secret containing certificates |
type | string | type for the certificate reference: config map or secret |
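A sketch of a manually provided TLS configuration for the metrics endpoint; the Secret name and file names are hypothetical:

```yaml
spec:
  processor:
    metrics:
      server:
        port: 9102                 # example Prometheus HTTP port
        tls:
          type: PROVIDED           # DISABLED (default), PROVIDED, or AUTO
          provided:
            type: secret           # config map or secret
            name: flp-metrics-tls  # hypothetical Secret holding the certificate
            certFile: tls.crt
            certKey: tls.key
```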
.spec.processor.resources
Description
resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Type
object
Property | Type | Description |
---|---|---|
limits | object | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
requests | object | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
.status
Description
FlowCollectorStatus defines the observed state of FlowCollector
Type
object
Required
conditions
Property | Type | Description |
---|---|---|
conditions | array | conditions represent the latest available observations of an object’s state |
conditions[] | object | Condition contains details for one aspect of the current state of this API Resource. |
namespace | string | namespace where console plugin and flowlogs-pipeline have been deployed. |
.status.conditions
Description
conditions represent the latest available observations of an object’s state
Type
array
.status.conditions[]
Description
Condition contains details for one aspect of the current state of this API Resource. This struct is intended for direct use as an array at the field path .status.conditions. For example:

type FooStatus struct{
    // Represents the observations of a foo’s current state.
    // Known .status.conditions.type are: “Available”, “Progressing”, and “Degraded”
    // +patchMergeKey=type
    // +patchStrategy=merge
    // +listType=map
    // +listMapKey=type
    Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`

    // other fields
}
Type
object
Required
lastTransitionTime
message
reason
status
type
Property | Type | Description |
---|---|---|
lastTransitionTime | string | lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. |
message | string | message is a human-readable message indicating details about the transition. This might be an empty string. |
observedGeneration | integer | observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. |
reason | string | reason contains a programmatic identifier indicating the reason for the condition’s last transition. Producers of specific condition types might define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field must not be empty. |
status | string | status of the condition, one of True, False, Unknown. |
type | string | type of condition in CamelCase or in foo.example.com/CamelCase. Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) |
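For reference, a hypothetical status snippet showing how a single condition is reported; the condition type and reason below are illustrative, not guaranteed values:

```yaml
status:
  namespace: netobserv
  conditions:
  - type: Ready                                    # hypothetical condition type
    status: "True"                                 # one of True, False, Unknown
    reason: Deployed                               # CamelCase programmatic identifier
    message: All NetObserv components are running
    lastTransitionTime: "2024-01-01T00:00:00Z"
    observedGeneration: 1
```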