- Troubleshooting OVN-Kubernetes
- Monitoring OVN-Kubernetes health by using readiness probes
- Viewing OVN-Kubernetes alerts in the console
- Viewing OVN-Kubernetes alerts in the CLI
- Viewing the OVN-Kubernetes logs using the CLI
- Viewing the OVN-Kubernetes logs using the web console
- Changing the OVN-Kubernetes log levels
- Checking the OVN-Kubernetes pod network connectivity
- Additional resources
Troubleshooting OVN-Kubernetes
OVN-Kubernetes has many sources of built-in health checks and logs.
Monitoring OVN-Kubernetes health by using readiness probes
The ovnkube-master and ovnkube-node pods have containers configured with readiness probes.
Prerequisites
- You have access to the OpenShift CLI (oc).
- You have access to the cluster with cluster-admin privileges.
- You have installed jq.
Procedure
Review the details of the ovnkube-master readiness probe by running the following command:
$ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master \
-o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'
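The jq filter selects each container's name and its readinessProbe definition from the first pod in the list. As a local illustration of what that filter extracts, here is a minimal Python sketch run against a sample, hypothetical pod payload (real payloads contain many more fields):

```python
import json

# Hypothetical, trimmed pod JSON of the shape returned by
# `oc get pods ... -o json`; probe commands here are illustrative only.
pod_list = {
    "items": [{
        "spec": {
            "containers": [
                {"name": "nbdb",
                 "readinessProbe": {"exec": {"command": ["check-nbdb-raft"]},
                                    "periodSeconds": 10}},
                {"name": "sbdb",
                 "readinessProbe": {"exec": {"command": ["check-sbdb-raft"]},
                                    "periodSeconds": 10}},
            ]
        }
    }]
}

# Equivalent of: jq '.items[0].spec.containers[] | .name,.readinessProbe'
for c in pod_list["items"][0]["spec"]["containers"]:
    print(c["name"])
    print(json.dumps(c.get("readinessProbe")))
```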
The readiness probe for the northbound and southbound database containers in the ovnkube-master pod checks the health of the Raft cluster hosting the databases.
Review the details of the ovnkube-node readiness probe by running the following command:
$ oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \
-o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'
The ovnkube-node container in the ovnkube-node pod has a readiness probe that verifies the presence of the ovn-kubernetes CNI configuration file. The absence of this file indicates that the pod is not running or is not ready to accept requests to configure pods.
Show all events, including the probe failures, for the namespace by using the following command:
$ oc get events -n openshift-ovn-kubernetes
Show the events for just this pod:
$ oc describe pod ovnkube-master-tp2z8 -n openshift-ovn-kubernetes
Show the messages and statuses from the cluster network operator:
$ oc get co/network -o json | jq '.status.conditions[]'
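The conditions returned by this command include types such as Available, Progressing, and Degraded. A healthy operator reports Available as True and the others as False; scanning for anything unexpected can be sketched locally as follows (the sample conditions are hypothetical):

```python
# Hypothetical subset of .status.conditions from the network cluster operator
conditions = [
    {"type": "Degraded", "status": "False"},
    {"type": "Progressing", "status": "False"},
    {"type": "Available", "status": "True"},
]

# Available should be True; the other condition types should be False.
expected = {"Available": "True"}
problems = [c["type"] for c in conditions
            if c["status"] != expected.get(c["type"], "False")]
print(problems or "network operator conditions look healthy")
```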
Show the ready status of each container in the ovnkube-master pods by running the following script:
$ for p in $(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes \
-o jsonpath='{range.items[*]}{" "}{.metadata.name}'); do echo === $p ===; \
oc get pods -n openshift-ovn-kubernetes $p -o json | jq '.status.containerStatuses[] | .name, .ready'; \
done
The expectation is that all container statuses report true. Failure of a readiness probe sets the status to false.
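The script above only iterates pods and prints each container's ready flag. The same check can be sketched locally against a sample containerStatuses payload (the container names and values here are hypothetical):

```python
# Hypothetical container statuses as found under .status.containerStatuses
statuses = [
    {"name": "northd", "ready": True},
    {"name": "nbdb", "ready": True},
    {"name": "ovnkube-master", "ready": False},  # a failed readiness probe
]

# Collect any containers whose readiness probe is currently failing
not_ready = [s["name"] for s in statuses if not s["ready"]]
if not_ready:
    print("containers failing readiness:", ", ".join(not_ready))
else:
    print("all containers ready")
```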
Viewing OVN-Kubernetes alerts in the console
The Alerting UI provides detailed information about alerts and their governing alerting rules and silences.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
Procedure
In the Administrator perspective, select Observe → Alerting. The three main pages in the Alerting UI in this perspective are the Alerts, Silences, and Alerting Rules pages.
View the rules for OVN-Kubernetes alerts by selecting Observe → Alerting → Alerting Rules.
Viewing OVN-Kubernetes alerts in the CLI
You can get information about alerts and their governing alerting rules and silences from the command line.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have installed jq.
Procedure
View active or firing alerts by running the following commands.
Set the alert manager route environment variable by running the following command:
$ ALERT_MANAGER=$(oc get route alertmanager-main -n openshift-monitoring \
-o jsonpath='{@.spec.host}')
Issue a curl request to the alert manager route API, with the correct authorization details and requesting specific fields, by running the following command:
$ curl -s -k -H "Authorization: Bearer \
$(oc create token prometheus-k8s -n openshift-monitoring)" \
https://$ALERT_MANAGER/api/v1/alerts \
| jq '.data[] | "\(.labels.severity) \(.labels.alertname) \(.labels.pod) \(.labels.container) \(.labels.endpoint) \(.labels.instance)"'
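The jq string interpolation at the end condenses each alert to a single line of label values. The same formatting in Python, over a sample Alertmanager response (the alert and its label values are hypothetical):

```python
# Hypothetical subset of an Alertmanager /api/v1/alerts response
response = {"data": [
    {"labels": {"severity": "warning", "alertname": "NorthboundStale",
                "pod": "ovnkube-master-84nc9", "container": "northd",
                "endpoint": "metrics", "instance": "10.0.134.156:9102"}},
]}

# Mirror the jq interpolation: one space-separated line per alert
lines = []
for alert in response["data"]:
    labels = alert["labels"]
    lines.append(" ".join(labels.get(k, "null") for k in
                          ("severity", "alertname", "pod",
                           "container", "endpoint", "instance")))
print("\n".join(lines))
```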
View alerting rules by running the following command:
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains("ovn")) or (.name|contains("OVN")) or (.name|contains("Ovn")) or (.name|contains("North")) or (.name|contains("South"))) and .type=="alerting")'
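The jq select keeps only rules of type alerting whose name contains OVN in any casing, or mentions the north or south databases. That predicate, applied in Python to a few sample rule entries (the rule names are illustrative):

```python
def is_ovn_alerting_rule(rule):
    """Mirror the jq filter: alerting rules whose name matches an OVN keyword."""
    name = rule.get("name", "")
    keywords = ("ovn", "OVN", "Ovn", "North", "South")
    return rule.get("type") == "alerting" and any(k in name for k in keywords)

# Hypothetical sample of .data.groups[].rules[] entries
rules = [
    {"name": "NorthboundStale", "type": "alerting"},
    {"name": "V4SubnetAllocationThresholdExceeded", "type": "alerting"},
    {"name": "ovnkube_master_ready", "type": "recording"},
]
print([r["name"] for r in rules if is_ovn_alerting_rule(r)])
```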
Viewing the OVN-Kubernetes logs using the CLI
You can view the logs for each of the ovnkube-master and ovnkube-node pods by using the OpenShift CLI (oc).
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
- You have installed jq.
Procedure
View the log for a specific pod:
$ oc logs -f <pod_name> -c <container_name> -n <namespace>
where:
-f
Optional: Specifies that the output follows what is being written into the logs.
<pod_name>
Specifies the name of the pod.
<container_name>
Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name.
<namespace>
Specifies the namespace the pod is running in.
For example:
$ oc logs ovnkube-master-7h4q7 -n openshift-ovn-kubernetes
$ oc logs -f ovnkube-master-7h4q7 -n openshift-ovn-kubernetes -c ovn-dbchecker
The contents of log files are printed out.
Examine the most recent entries in all the containers in the ovnkube-master pods:
$ for p in $(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes \
-o jsonpath='{range.items[*]}{" "}{.metadata.name}'); \
do echo === $p ===; for container in $(oc get pods -n openshift-ovn-kubernetes $p \
-o json | jq -r '.status.containerStatuses[] | .name');do echo ---$container---; \
oc logs -c $container $p -n openshift-ovn-kubernetes --tail=5; done; done
View the last 5 lines of every log in every container in an ovnkube-master pod by using the following command:
$ oc logs -l app=ovnkube-master -n openshift-ovn-kubernetes --all-containers --tail 5
Viewing the OVN-Kubernetes logs using the web console
You can view the logs for each of the ovnkube-master and ovnkube-node pods in the web console.
Prerequisites
- You have access to the OpenShift CLI (oc).
Procedure
In the OKD console, navigate to Workloads → Pods or navigate to the pod through the resource you want to investigate.
Select the openshift-ovn-kubernetes project from the drop-down menu.
Click the name of the pod you want to investigate.
Click Logs. By default for the ovnkube-master pod, the logs associated with the northd container are displayed.
Use the drop-down menu to select logs for each container in turn.
Changing the OVN-Kubernetes log levels
The default log level for OVN-Kubernetes is 2. To debug OVN-Kubernetes, set the log level to 5. Follow this procedure to increase the log level of the OVN-Kubernetes components to help you debug an issue.
Prerequisites
- You have access to the cluster with cluster-admin privileges.
- You have access to the OpenShift Container Platform web console.
Procedure
Run the following command to get detailed information for all pods in the OVN-Kubernetes project:
$ oc get po -o wide -n openshift-ovn-kubernetes
Example output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ovnkube-master-84nc9 6/6 Running 0 50m 10.0.134.156 ip-10-0-134-156.ec2.internal <none> <none>
ovnkube-master-gmlqv 6/6 Running 0 50m 10.0.209.180 ip-10-0-209-180.ec2.internal <none> <none>
ovnkube-master-nhts2 6/6 Running 1 (48m ago) 50m 10.0.147.31 ip-10-0-147-31.ec2.internal <none> <none>
ovnkube-node-2cbh8 5/5 Running 0 43m 10.0.217.114 ip-10-0-217-114.ec2.internal <none> <none>
ovnkube-node-6fvzl 5/5 Running 0 50m 10.0.147.31 ip-10-0-147-31.ec2.internal <none> <none>
ovnkube-node-f4lzz 5/5 Running 0 24m 10.0.146.76 ip-10-0-146-76.ec2.internal <none> <none>
ovnkube-node-jf67d 5/5 Running 0 50m 10.0.209.180 ip-10-0-209-180.ec2.internal <none> <none>
ovnkube-node-np9mf 5/5 Running 0 40m 10.0.165.191 ip-10-0-165-191.ec2.internal <none> <none>
ovnkube-node-qjldg 5/5 Running 0 50m 10.0.134.156 ip-10-0-134-156.ec2.internal <none> <none>
Create a ConfigMap file similar to the following example and use a filename such as env-overrides.yaml:
Example ConfigMap file
kind: ConfigMap
apiVersion: v1
metadata:
  name: env-overrides
  namespace: openshift-ovn-kubernetes
data:
  ip-10-0-217-114.ec2.internal: | (1)
    # This sets the log level for the ovn-kubernetes node process:
    OVN_KUBE_LOG_LEVEL=5
    # You might also/instead want to enable debug logging for ovn-controller:
    OVN_LOG_LEVEL=dbg
  ip-10-0-209-180.ec2.internal: |
    # This sets the log level for the ovn-kubernetes node process:
    OVN_KUBE_LOG_LEVEL=5
    # You might also/instead want to enable debug logging for ovn-controller:
    OVN_LOG_LEVEL=dbg
  _master: | (2)
    # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker:
    OVN_KUBE_LOG_LEVEL=5
    # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters:
    OVN_LOG_LEVEL=dbg
1 Specify the name of the node for which you want to set the debug log level.
2 Specify _master to set the log levels of the ovnkube-master components.
Apply the ConfigMap file by using the following command:
$ oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml
Example output
configmap/env-overrides created
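When debug logging needs to be enabled on many nodes, the per-node keys of the env-overrides ConfigMap can be generated instead of typed by hand. A minimal sketch, assuming the node names from this procedure and emitting the YAML directly to avoid any library dependency:

```python
# Nodes to enable debug logging on (the examples from this procedure)
nodes = ["ip-10-0-217-114.ec2.internal", "ip-10-0-209-180.ec2.internal"]
node_env = "OVN_KUBE_LOG_LEVEL=5\nOVN_LOG_LEVEL=dbg\n"

data = {node: node_env for node in nodes}
data["_master"] = node_env  # _master targets the ovnkube-master components

# Emit minimal ConfigMap YAML; a library such as PyYAML could be used instead.
lines = ["kind: ConfigMap", "apiVersion: v1", "metadata:",
         "  name: env-overrides", "  namespace: openshift-ovn-kubernetes",
         "data:"]
for key, value in data.items():
    lines.append(f"  {key}: |")
    for env_line in value.rstrip("\n").split("\n"):
        lines.append(f"    {env_line}")
print("\n".join(lines))
```

Pipe the output to a file and apply it with oc apply -f as shown above.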
Restart the ovnkube pods to apply the new log level by using the following commands:
$ oc delete pod -n openshift-ovn-kubernetes \
--field-selector spec.nodeName=ip-10-0-217-114.ec2.internal -l app=ovnkube-node
$ oc delete pod -n openshift-ovn-kubernetes \
--field-selector spec.nodeName=ip-10-0-209-180.ec2.internal -l app=ovnkube-node
$ oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-master
Checking the OVN-Kubernetes pod network connectivity
The connectivity check controller, in OKD 4.10 and later, orchestrates connection verification checks in your cluster. These include the Kubernetes API, the OpenShift API, and individual nodes. The results of the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel.
Prerequisites
- You have access to the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
- You have installed jq.
Procedure
To list the current PodNetworkConnectivityCheck objects, enter the following command:
$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics
View the most recent success for each connection object by using the following command:
$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
-o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'
View the most recent failures for each connection object by using the following command:
$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
-o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'
View the most recent outages for each connection object by using the following command:
$ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \
-o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'
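Each check object records its target endpoint plus lists of successes, failures, and outages with the most recent entry first, which is why the jq filters above index element [0]. A local sketch of pulling the latest success from a sample, hypothetical object:

```python
# Hypothetical, trimmed PodNetworkConnectivityCheck object
check = {
    "spec": {"targetEndpoint": "api.example.com:6443"},
    "status": {
        "successes": [  # most recent entry first
            {"time": "2023-01-23T11:33:02Z", "success": True,
             "reason": "TCPConnect",
             "message": "tcp connection to api.example.com:6443 succeeded"},
        ],
        "failures": [],
        "outages": [],
    },
}

# Equivalent of: jq '.spec.targetEndpoint,.status.successes[0]'
latest = check["status"]["successes"][0]
print(check["spec"]["targetEndpoint"], latest["time"], latest["reason"])
```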
The connectivity check controller also logs metrics from these checks into Prometheus.
View all the metrics by running the following command:
$ oc exec prometheus-k8s-0 -n openshift-monitoring -- \
promtool query instant http://localhost:9090 \
'{component="openshift-network-diagnostics"}'
View the latency between the source pod and the OpenShift API service for the last 5 minutes:
$ oc exec prometheus-k8s-0 -n openshift-monitoring -- \
promtool query instant http://localhost:9090 \
'{component="openshift-network-diagnostics"}'