Gathering data about your cluster
You can use the following tools to get debugging information about your OKD cluster.
About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:
Resource definitions
Service logs
By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
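For example, running the command with no additional arguments collects the default data set:
$ oc adm must-gather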
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.0
To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example:
$ oc adm must-gather -- /usr/bin/gather_audit_logs
Audit logs are not collected as part of the default set of information to reduce the size of the files.
When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory.
For example:
NAMESPACE NAME READY STATUS RESTARTS AGE
...
openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s
openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s
...
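While the collection runs, you can list the temporary pods across all namespaces; an illustrative command (the project and pod names are randomly generated):
$ oc get pods --all-namespaces | grep must-gather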
Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option.
For example:
$ oc adm must-gather --run-namespace <namespace> \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.0
Gathering data about specific features
You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.
| Image | Purpose |
| --- | --- |
| | Data collection for KubeVirt. |
| | Data collection for Knative. |
| | Data collection for service mesh. |
| | Data collection for migration-related information. |
| | Data collection for OpenShift Data Foundation. |
| | Data collection for OpenShift Logging. |
| | Data collection for Local Storage Operator. |
| | Data collection for the Secrets Store CSI Driver Operator. |
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
The OKD CLI (oc) is installed.
Procedure
Navigate to the directory where you want to store the must-gather data.
Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:
$ oc adm must-gather \
--image-stream=openshift/must-gather \ (1)
--image=quay.io/kubevirt/must-gather (2)
1 The default OKD must-gather image
2 The must-gather image for KubeVirt
Additional resources
Gathering debugging data for the Custom Metrics Autoscaler.
Gathering network logs
You can gather network logs on all nodes in a cluster.
Procedure
Run the oc adm must-gather command with -- gather_network_logs:
$ oc adm must-gather -- gather_network_logs
Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:
$ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 (1)
1 Replace must-gather.local.472290403699006248 with the actual directory name.
Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.
Querying bootstrap node journal logs
If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.
Prerequisites
You have SSH access to your bootstrap node.
You have the fully qualified domain name of the bootstrap node.
Procedure
Query bootkube.service journald unit logs from a bootstrap node during OKD installation. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
Querying cluster node journal logs
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Your API service is still functional.
You have installed the OpenShift CLI (oc).
You have SSH access to your hosts.
Procedure
Query kubelet journald unit logs from OKD cluster nodes. The following example queries control plane nodes only:
$ oc adm node-logs --role=master -u kubelet (1)
1 Replace kubelet as appropriate to query other unit logs.
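For example, an illustrative variation that queries the crio unit logs on control plane nodes:
$ oc adm node-logs --role=master -u crio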
Collect logs from specific subdirectories under /var/log/ on cluster nodes.
Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver
Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
OKD 4 cluster nodes running Fedora CoreOS (FCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OKD API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
Collecting a host network trace
Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time.
You can use a combination of the oc adm must-gather command and the quay.io/openshift/origin-network-tools:latest container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues.
The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine.
The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command.
Prerequisites
You are logged in to OKD as a user with the cluster-admin role.
You have installed the OpenShift CLI (oc).
Procedure
Run a packet capture from the host network on some nodes by running the following command:
$ oc adm must-gather \
--dest-dir /tmp/captures \ (1)
--source-dir '/tmp/tcpdump/' \ (2)
--image quay.io/openshift/origin-network-tools:latest \ (3)
--node-selector 'node-role.kubernetes.io/worker' \ (4)
--host-network=true \ (5)
--timeout 30s \ (6)
-- \
tcpdump -i any \ (7)
-w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300
1 The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory.
2 When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod.
3 The --image argument specifies a container image that includes the tcpdump command.
4 The --node-selector argument and example value specify to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name arguments, the packet captures are performed on all nodes.
5 The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node.
6 The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes.
7 The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.
Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.
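For example, you might send a request to the affected application while the capture is running; a minimal illustration, where the route host is a hypothetical placeholder:
$ curl -kv https://<app_route_host>/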
Review the packet capture files that oc adm must-gather transferred from the pods to your client machine:
tmp/captures
├── event-filter.html
├── ip-10-0-192-217-ec2-internal (1)
│ └── quay.io/openshift/origin-network-tools:latest...
│ └── 2022-01-13T19:31:31.pcap
├── ip-10-0-201-178-ec2-internal (1)
│ └── quay.io/openshift/origin-network-tools:latest...
│ └── 2022-01-13T19:31:30.pcap
├── ip-...
└── timestamp
1 The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present.
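To inspect a transferred capture on your client machine, you can read it with tcpdump; a minimal sketch, with the capture file path as a placeholder:
$ tcpdump -nn -r <path_to_capture_file>.pcap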
About toolbox
toolbox is a tool that starts a container on a Fedora CoreOS (FCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run your favorite debugging or admin tools.
Installing packages to a toolbox container
By default, running the toolbox command starts a container with the quay.io/fedora/fedora:36 image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.
Prerequisites
- You have accessed a node with the oc debug node/<node_name> command.
Procedure
Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:
# chroot /host
Start the toolbox container:
# toolbox
Install the additional package, such as wget:
# dnf install -y <package_name>
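For example, to install and then use wget (the URL is a placeholder for illustration):
# dnf install -y wget
# wget <url_of_required_tool>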
Starting an alternative image with toolbox
By default, running the toolbox command starts a container with the quay.io/fedora/fedora:36 image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.
Prerequisites
- You have accessed a node with the oc debug node/<node_name> command.
Procedure
Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:
# chroot /host
Create a .toolboxrc file in the home directory for the root user ID:
# vi ~/.toolboxrc
REGISTRY=quay.io (1)
IMAGE=fedora/fedora:33-x86_64 (2)
TOOLBOX_NAME=toolbox-fedora-33 (3)
1 Optional: Specify an alternative container registry.
2 Specify an alternative image to start.
3 Optional: Specify an alternative name for the toolbox container.
Start a toolbox container with the alternative image:
# toolbox
If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container to avoid issues with sosreport plugins.
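For example, assuming the container name from the preceding .toolboxrc example:
# podman rm toolbox-fedora-33
# toolbox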