Debug

This page contains instructions on how to debug KubeVirt.

This is useful to both KubeVirt developers and advanced users who would like to gain a deep understanding of what's happening behind the scenes.

Log Verbosity

KubeVirt produces a lot of logging throughout its codebase. Some log entries have a verbosity level assigned to them; this level is the minimum configured verbosity at which the entry is emitted.

In code, such a log entry looks similar to log.Log.V(verbosity).Infof("..."), where verbosity is the minimum verbosity level for this entry.

For example, if a log entry's verbosity level is 3, the entry is emitted only when the configured verbosity is equal to or greater than 3; otherwise it is filtered out.

Currently, log verbosity can be set per component or per node. The most up-to-date API is detailed here.

Setting verbosity per KubeVirt component

One way of raising log verbosity is to set it manually for the different components in the KubeVirt CR:

    apiVersion: kubevirt.io/v1
    kind: KubeVirt
    metadata:
      name: kubevirt
      namespace: kubevirt
    spec:
      configuration:
        developerConfiguration:
          logVerbosity:
            virtLauncher: 2
            virtHandler: 3
            virtController: 4
            virtAPI: 5
            virtOperator: 6

This option is best for debugging specific components.
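
If you prefer not to edit the whole CR, the same setting can be applied with a merge patch. This is a minimal sketch, assuming the KubeVirt CR is named kubevirt in the kubevirt namespace as in the example above:

    # raise virt-handler and virt-launcher verbosity to 6 via a merge patch
    $ kubectl patch kubevirt kubevirt -n kubevirt --type merge \
        -p '{"spec":{"configuration":{"developerConfiguration":{"logVerbosity":{"virtHandler":6,"virtLauncher":6}}}}}'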

libvirt log_filters in virtqemud.conf are set according to virt-launcher log verbosity

| Verbosity level | log_filters in virtqemud.conf |
|-----------------|-------------------------------|
| 5 | 3:remote 4:event 3:util.json 3:util.object 3:util.dbus 3:util.netlink 3:node_device 3:rpc 3:access 3:util.threadjob 3:cpu.cpu 3:qemu.qemu_monitor 3:qemu.qemu_monitor_json 3:conf.domain_addr 1:* |
| 6 | 3:remote 4:event 3:util.json 3:util.object 3:util.dbus 3:util.netlink 3:node_device 3:rpc 3:access 3:util.threadjob 3:cpu.cpu 3:qemu.qemu_monitor 1:* |
| 7 | 3:remote 4:event 3:util.json 3:util.object 3:util.dbus 3:util.netlink 3:node_device 3:rpc 3:access 3:util.threadjob 3:cpu.cpu 1:* |
| 8 and above | 3:remote 4:event 3:util.json 3:util.object 3:util.dbus 3:util.netlink 3:node_device 3:rpc 3:access 1:* |
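
The filtered libvirt/QEMU output ends up in the virt-launcher pod that runs the VMI. The following is a minimal sketch for locating and reading it; the vm.kubevirt.io/name label and the compute container name are assumptions based on default virt-launcher pods:

    # find the virt-launcher pod backing a VMI (here assumed to be named my-vmi)
    $ kubectl get pods -l vm.kubevirt.io/name=my-vmi
    # read the libvirt/QEMU output from that pod's compute container
    $ kubectl logs <virt-launcher pod name> -c compute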

Users can set their own log filters via the kubevirt.io/libvirt-log-filters annotation in the VMI configuration, e.g.:

    kind: VirtualMachineInstance
    metadata:
      name: my-vmi
      annotations:
        kubevirt.io/libvirt-log-filters: "3:remote 4:event 1:*"
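
To verify the annotation is present on the running VMI, it can be read back with a JSONPath query; a small sketch, assuming the VMI is named my-vmi as above:

    # print the log-filters annotation of the VMI (dots in the key must be escaped)
    $ kubectl get vmi my-vmi -o jsonpath='{.metadata.annotations.kubevirt\.io/libvirt-log-filters}'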

Setting verbosity per node

Another way is to set the verbosity level per node:

    apiVersion: kubevirt.io/v1
    kind: KubeVirt
    metadata:
      name: kubevirt
      namespace: kubevirt
    spec:
      configuration:
        developerConfiguration:
          nodeVerbosity:
            "node01": 4
            "otherNodeName": 6

nodeVerbosity is essentially a map from string to int, where the key is the node name and the value is the verbosity level. That verbosity level applies to all KubeVirt components running on that node (e.g. virt-handler, virt-launcher, etc.).
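
As with the per-component setting, nodeVerbosity can also be applied with a merge patch instead of editing the full CR; a minimal sketch, assuming the KubeVirt CR name and namespace from the examples above:

    # raise verbosity to 6 for all KubeVirt components running on node01
    $ kubectl patch kubevirt kubevirt -n kubevirt --type merge \
        -p '{"spec":{"configuration":{"developerConfiguration":{"nodeVerbosity":{"node01":6}}}}}'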

How to retrieve KubeVirt components’ logs

In Kubernetes, logs are exposed at the Pod level. Therefore, we first need to find the Pods of KubeVirt's core components by listing the Pods in KubeVirt's install namespace.

For example:

    $> kubectl get pods -n <KubeVirt Install Namespace>
    NAME                               READY   STATUS    RESTARTS   AGE
    disks-images-provider-7gqbc        1/1     Running   0          32m
    disks-images-provider-vg4kx        1/1     Running   0          32m
    virt-api-57fcc4497b-7qfmc          1/1     Running   0          31m
    virt-api-57fcc4497b-tx9nc          1/1     Running   0          31m
    virt-controller-76c784655f-7fp6m   1/1     Running   0          30m
    virt-controller-76c784655f-f4pbd   1/1     Running   0          30m
    virt-handler-2m86x                 1/1     Running   0          30m
    virt-handler-9qs6z                 1/1     Running   0          30m
    virt-operator-7ccfdbf65f-q5snk     1/1     Running   0          32m
    virt-operator-7ccfdbf65f-vllz8     1/1     Running   0          32m

Then, we can pick one of the pods and fetch its logs. For example:

    $> kubectl logs -n <KubeVirt Install Namespace> virt-handler-2m86x | head -n8
    {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"}
    {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373726Z"}
    {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-04-17T08:58:37.373782Z"}
    {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-04-17T08:58:37.390221Z"}
    {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"}
    {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"}

In both examples above, <KubeVirt Install Namespace> needs to be replaced with the actual namespace KubeVirt is installed in.
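
Since the log entries are structured JSON (one object per line, as shown above), they can be filtered with standard tools. A small sketch, assuming jq is available locally:

    # show only warning and error entries from a virt-handler pod
    $ kubectl logs -n <KubeVirt Install Namespace> virt-handler-2m86x \
        | jq -c 'select(.level == "warning" or .level == "error")'

    # without jq, a plain grep on the JSON "level" field also works
    $ kubectl logs -n <KubeVirt Install Namespace> virt-handler-2m86x | grep '"level":"error"'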

KubeVirt PProf Profiler

Using the cluster-profiler client tool, a developer can get PProf profiling data for every component in the KubeVirt control plane. The following is a short user guide.

Compile cluster-profiler

Build from source code

    $ git clone https://github.com/kubevirt/kubevirt.git
    $ cd kubevirt/tools/cluster-profiler
    $ go build

Enable the feature gate

Add ClusterProfiler to the KubeVirt config

    $ cat << END > enable-feature-gate.yaml
    ---
    apiVersion: kubevirt.io/v1
    kind: KubeVirt
    metadata:
      name: kubevirt
      namespace: kubevirt
    spec:
      configuration:
        developerConfiguration:
          featureGates:
            - ClusterProfiler
    END
    $ kubectl apply -f enable-feature-gate.yaml
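
To confirm the feature gate was applied, the developerConfiguration of the CR can be read back; a small sketch, using the same CR name and namespace as above:

    # should print a list containing ClusterProfiler
    $ kubectl get kubevirt kubevirt -n kubevirt \
        -o jsonpath='{.spec.configuration.developerConfiguration.featureGates}'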

Do the profiling

Start CPU profiling

    $ cluster-profiler --cmd start
    2023/05/17 09:31:09 SUCCESS: started cpu profiling KubeVirt control plane

Stop CPU profiling

    $ cluster-profiler --cmd stop
    2023/05/17 09:31:14 SUCCESS: stopped cpu profiling KubeVirt control plane

Dump the pprof result

    $ cluster-profiler --cmd dump
    2023/05/17 09:31:18 Moving already existing "cluster-profiler-results" => "cluster-profiler-results-old-67fq"
    SUCCESS: Dumped PProf 6 results for KubeVirt control plane to [cluster-profiler-results]

The PProf results can be found in the folder cluster-profiler-results:

    $ tree cluster-profiler-results
    cluster-profiler-results
    ├── virt-api-5f96f84dcb-lkpb7
    │   ├── allocs.pprof
    │   ├── block.pprof
    │   ├── cpu.pprof
    │   ├── goroutine.pprof
    │   ├── heap.pprof
    │   ├── mutex.pprof
    │   └── threadcreate.pprof
    ├── virt-controller-5bbd9554d9-2f8j2
    │   ├── allocs.pprof
    │   ├── block.pprof
    │   ├── cpu.pprof
    │   ├── goroutine.pprof
    │   ├── heap.pprof
    │   ├── mutex.pprof
    │   └── threadcreate.pprof
    ├── virt-controller-5bbd9554d9-qct2w
    │   ├── allocs.pprof
    │   ├── block.pprof
    │   ├── cpu.pprof
    │   ├── goroutine.pprof
    │   ├── heap.pprof
    │   ├── mutex.pprof
    │   └── threadcreate.pprof
    ├── virt-handler-ccq6c
    │   ├── allocs.pprof
    │   ├── block.pprof
    │   ├── cpu.pprof
    │   ├── goroutine.pprof
    │   ├── heap.pprof
    │   ├── mutex.pprof
    │   └── threadcreate.pprof
    ├── virt-operator-cdc677b7-pg9j2
    │   ├── allocs.pprof
    │   ├── block.pprof
    │   ├── cpu.pprof
    │   ├── goroutine.pprof
    │   ├── heap.pprof
    │   ├── mutex.pprof
    │   └── threadcreate.pprof
    └── virt-operator-cdc677b7-pjqdx
        ├── allocs.pprof
        ├── block.pprof
        ├── cpu.pprof
        ├── goroutine.pprof
        ├── heap.pprof
        ├── mutex.pprof
        └── threadcreate.pprof
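
The dumped .pprof files can then be inspected locally with Go's standard pprof tooling; a minimal sketch, assuming Go is installed and using one of the virt-handler profiles from the tree above:

    # print the top CPU consumers recorded in the profile
    $ go tool pprof -top cluster-profiler-results/virt-handler-ccq6c/cpu.pprof

    # or explore the profile interactively in a local web UI
    $ go tool pprof -http=:8080 cluster-profiler-results/virt-handler-ccq6c/cpu.pprof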