Troubleshooting

Overview

Here are some tips to troubleshoot a failed upgrade:

  • Check version-specific upgrade notes. You can click the version in the support matrix table to see if there are any known issues.
  • Dive into the upgrade design proposal. The following section briefly describes phases within an upgrade and possible diagnostic methods.

Diagnose the upgrade flow

A Harvester upgrade process contains several phases.

Phase 1: Provision upgrade repository VM

The Harvester controller downloads a Harvester release ISO file and uses it to provision a VM. During this phase, the upgrade status window shows the repository provisioning progress.

The time needed to complete this phase depends on the user's network speed and cluster resource utilization. Failures in this phase are usually caused by slow network connections; if this happens, you can start the upgrade over.

You can also check the status of the repository VM (named with the format upgrade-repo-hvst-xxxx) and its corresponding pod:

    $ kubectl get vm -n harvester-system
    NAME                              AGE    STATUS     READY
    upgrade-repo-hvst-upgrade-9gmg2   101s   Starting   False

    $ kubectl get pods -n harvester-system | grep upgrade-repo-hvst
    virt-launcher-upgrade-repo-hvst-upgrade-9gmg2-4mnmq   1/1   Running   0   4m44s
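
If the repository VM stays in a non-ready state, inspecting the VM object and its virt-launcher pod usually reveals the cause (for example, an ISO download error or insufficient cluster resources). A minimal sketch, assuming the resource names from the output above:

    # show recent events and conditions of the repository VM
    $ kubectl describe vm upgrade-repo-hvst-upgrade-9gmg2 -n harvester-system

    # inspect the corresponding virt-launcher pod
    $ kubectl describe pod virt-launcher-upgrade-repo-hvst-upgrade-9gmg2-4mnmq -n harvester-system
    $ kubectl logs virt-launcher-upgrade-repo-hvst-upgrade-9gmg2-4mnmq -n harvester-system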

Phase 2: Preload container images

The Harvester controller creates jobs on each Harvester node to download images from the repository VM and preload them. These are the container images required for the next release.

During this phase, the upgrade status window shows the image preloading progress on each node.

It takes a while for all nodes to preload images. If the upgrade fails in this phase, check the job logs in the cattle-system namespace:

    $ kubectl get jobs -n cattle-system | grep prepare
    apply-hvst-upgrade-9gmg2-prepare-on-node1-with-2bbea1599a-f0e86   0/1   47s    47s
    apply-hvst-upgrade-9gmg2-prepare-on-node4-with-2bbea1599a-041e4   1/1   2m3s   2m50s

    $ kubectl logs jobs/apply-hvst-upgrade-9gmg2-prepare-on-node1-with-2bbea1599a-f0e86 -n cattle-system
    ...

It is also safe to start the upgrade over if it fails in this phase.
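
To see which node a slow or stuck prepare job runs on, list the job's pods with wide output, or follow a job's logs while it runs. A minimal sketch, reusing the job name from the example above:

    # show prepare pods and the nodes they are scheduled on
    $ kubectl get pods -n cattle-system -o wide | grep prepare

    # follow the logs of a specific prepare job
    $ kubectl logs -f jobs/apply-hvst-upgrade-9gmg2-prepare-on-node1-with-2bbea1599a-f0e86 -n cattle-system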

Phase 3: Upgrade system services


In this phase, the Harvester controller upgrades the component Helm charts with a job. You can check the apply-manifests job with the following commands:

    $ kubectl get jobs -n harvester-system -l harvesterhci.io/upgradeComponent=manifest
    NAME                                 COMPLETIONS   DURATION   AGE
    hvst-upgrade-9gmg2-apply-manifests   0/1           46s        46s

    $ kubectl logs jobs/hvst-upgrade-9gmg2-apply-manifests -n harvester-system
    ...
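
If the apply-manifests job does not complete, you can also inspect the pod it created through the standard job-name label. A minimal sketch, assuming the job name shown above:

    # find the pod created by the apply-manifests job
    $ kubectl get pods -n harvester-system -l job-name=hvst-upgrade-9gmg2-apply-manifests

    # show its recent events if the pod is stuck
    $ kubectl describe pods -n harvester-system -l job-name=hvst-upgrade-9gmg2-apply-manifests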

Phase 4: Upgrade nodes


The Harvester controller creates jobs on each node (one at a time) to upgrade the node's OS and RKE2 runtime. For multi-node clusters, there are two kinds of jobs to upgrade a node:

  • pre-drain job: live-migrates or shuts down the VMs on a node. When the job completes, the embedded Rancher service upgrades the RKE2 runtime on that node.
  • post-drain job: upgrades the OS and reboots the node.

For single-node clusters, there is only one single-node-upgrade type job for each node (named with the format hvst-upgrade-xxx-single-node-upgrade-<hostname>).

You can check the node jobs with:

    $ kubectl get jobs -n harvester-system -l harvesterhci.io/upgradeComponent=node
    NAME                                  COMPLETIONS   DURATION   AGE
    hvst-upgrade-9gmg2-post-drain-node1   1/1           118s       6m34s
    hvst-upgrade-9gmg2-post-drain-node2   0/1           9s         9s
    hvst-upgrade-9gmg2-pre-drain-node1    1/1           3s         8m14s
    hvst-upgrade-9gmg2-pre-drain-node2    1/1           7s         85s

    $ kubectl logs -n harvester-system jobs/hvst-upgrade-9gmg2-post-drain-node2
    ...
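
While a node is being upgraded, it can help to confirm that the node was cordoned and that its VMs were migrated or shut down. A minimal sketch using standard kubectl and KubeVirt resources:

    # a node being drained shows SchedulingDisabled
    $ kubectl get nodes

    # check which nodes the virtual machine instances are currently running on
    $ kubectl get vmi -A -o wide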

caution

Do not start the upgrade over if it fails in this phase.

Phase 5: Clean-up

The Harvester controller deletes the upgrade repository VM and all files that are no longer needed.
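
After the clean-up finishes, the repository VM should no longer be listed. A quick, optional check (the command prints nothing once the VM is gone):

    $ kubectl get vm -n harvester-system | grep upgrade-repo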

Common operations

Start over an upgrade

  1. Log in to a control plane node.

  2. List Upgrade CRs in the cluster:

     # become root
     $ sudo -i

     # list the ongoing upgrade
     $ kubectl get upgrade.harvesterhci.io -n harvester-system -l harvesterhci.io/latestUpgrade=true
     NAME                 AGE
     hvst-upgrade-9gmg2   10m

  3. Delete the Upgrade CR:

     $ kubectl delete upgrade.harvesterhci.io/hvst-upgrade-9gmg2 -n harvester-system
  4. Click the upgrade button in the Harvester dashboard to start an upgrade again.
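
Before clicking the upgrade button, you can optionally confirm that the previous Upgrade CR has been removed:

    # the previous upgrade should no longer be listed
    $ kubectl get upgrade.harvesterhci.io -n harvester-system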

Download upgrade logs

Harvester automatically collects all the upgrade-related logs and records the upgrade procedure. This behavior is enabled by default; you can also choose to opt out of it.

The "Enable Logging" checkbox on the upgrade confirmation dialog

You can click the Download Log button to download the log archive during an upgrade.

Download the upgrade log archive by clicking the "Download Log" button on the upgrade dialog

Log entries are collected as files for each upgrade-related Pod, even for intermediate Pods. The support bundle provides a snapshot of the current state of the cluster, including logs and resource manifests, while the upgrade log preserves any logs generated during an upgrade. By combining the two, you can further investigate issues that occur during upgrades.

The upgrade log archive contains all the logs generated by the upgrade-related Pods

After the upgrade ends, Harvester stops collecting the upgrade logs to avoid occupying disk space. In addition, you can click the Dismiss it button to purge the upgrade logs.


For more details, please refer to the upgrade log HEP.

caution

The storage volume for upgrade-related logs is 1 GB by default. If an upgrade runs into issues, the logs may consume all of the available space on the volume. To work around this, try the following steps:

  1. Detach the log-archive volume by scaling down the fluentd StatefulSet and the downloader Deployment.

     # locate the StatefulSet and Deployment
     $ kubectl -n harvester-system get statefulsets -l harvesterhci.io/upgradeLogComponent=aggregator
     NAME                                          READY   AGE
     hvst-upgrade-xxxxx-upgradelog-infra-fluentd   1/1     43s

     $ kubectl -n harvester-system get deployments -l harvesterhci.io/upgradeLogComponent=downloader
     NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
     hvst-upgrade-xxxxx-upgradelog-downloader   1/1     1            1           38s

     # scale down the resources to terminate any Pods using the volume
     $ kubectl -n harvester-system scale statefulset hvst-upgrade-xxxxx-upgradelog-infra-fluentd --replicas=0
     statefulset.apps/hvst-upgrade-xxxxx-upgradelog-infra-fluentd scaled
     $ kubectl -n harvester-system scale deployment hvst-upgrade-xxxxx-upgradelog-downloader --replicas=0
     deployment.apps/hvst-upgrade-xxxxx-upgradelog-downloader scaled

  2. Expand the volume size via the Longhorn dashboard. For more details, please refer to the volume expansion guide.

     # find the actual name of the target volume
     $ kubectl -n harvester-system get pvc -l harvesterhci.io/upgradeLogComponent=log-archive -o jsonpath='{.items[].spec.volumeName}'
     pvc-63355afb-ce61-46c4-8781-377cf962278a

  3. Recover the fluentd StatefulSet and the downloader Deployment.

     $ kubectl -n harvester-system scale statefulset hvst-upgrade-xxxxx-upgradelog-infra-fluentd --replicas=1
     statefulset.apps/hvst-upgrade-xxxxx-upgradelog-infra-fluentd scaled
     $ kubectl -n harvester-system scale deployment hvst-upgrade-xxxxx-upgradelog-downloader --replicas=1
     deployment.apps/hvst-upgrade-xxxxx-upgradelog-downloader scaled
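
After scaling the resources back up, you can verify that the expanded capacity is reflected on the PVC and that the aggregator and downloader are ready again. A minimal sketch reusing the labels from the steps above:

    # verify the PVC capacity after expansion
    $ kubectl -n harvester-system get pvc -l harvesterhci.io/upgradeLogComponent=log-archive

    # verify the StatefulSet and Deployment are ready again
    $ kubectl -n harvester-system get statefulsets -l harvesterhci.io/upgradeLogComponent=aggregator
    $ kubectl -n harvester-system get deployments -l harvesterhci.io/upgradeLogComponent=downloader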

Clean up unused images

The default value of imageGCHighThresholdPercent in KubeletConfiguration is 85. If the kubelet detects that disk usage exceeds 85%, it tries to remove unused images.

During a Harvester upgrade, the system loads new images onto each node. If disk usage exceeds 85%, these new images may be marked for cleanup because they are not yet used by any containers. In an air-gapped environment, this can break the upgrade because the new images cannot be found in the cluster.

If you get an error message like "Node xxx will reach xx.xx% storage space after loading new images. It's higher than kubelet image garbage collection threshold 85%.", run crictl rmi --prune to clean up unused images before starting a new upgrade, as shown in the sketch below.

The insufficient disk space error message
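
A minimal sketch of the clean-up, run as root on each affected node. The df path is an assumption based on where RKE2 typically stores container images and may differ in your environment:

    # check remaining disk space (path is an assumption; adjust to your setup)
    $ df -h /var/lib/rancher

    # list images known to the container runtime
    $ crictl images

    # remove images that are not referenced by any container
    $ crictl rmi --prune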