Upgrade from v1.1.2 to v1.2.0 (not recommended)

caution

Due to known issues found in v1.2.0, we don't recommend upgrading to v1.2.0. Please upgrade your v1.1.x cluster to v1.2.1 instead.

General information

tip

Before you start an upgrade, you can run the pre-check script to make sure the cluster is in a stable state. For more details about the script, please visit this URL.

Once there is an upgradable version, the Harvester GUI Dashboard page will show an upgrade button. For more details, please refer to start an upgrade.

For an upgrade in an air-gapped environment, please refer to prepare an air-gapped upgrade.

Known issues


1. An upgrade can’t start and reports "validator.harvesterhci.io" denied the request: managed chart rancher-monitoring is not ready, please wait for it to be ready

If a cluster is configured with a storage network, an upgrade can't start and reports the error message shown in the title of this issue.

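To see why the rancher-monitoring managed chart is reported as not ready, you can inspect the chart object directly. This is a minimal sketch, assuming the managed chart lives in the fleet-local namespace as in a default Harvester installation and that yq is available:

  # Inspect the status of the rancher-monitoring managed chart (namespace assumed to be fleet-local)
  kubectl get managedcharts.management.cattle.io rancher-monitoring -n fleet-local -o yaml | yq '.status'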


2. An upgrade is stuck in Creating Upgrade Repository

During an upgrade, the Creating Upgrade Repository step is stuck in the Pending state.


Please perform the following steps to check if the cluster runs into the issue:

  1. Check the upgrade repository pod (a sample command is sketched after this list):


    If the virt-launcher-upgrade-repo-hvst-<upgrade-name> pod stays in ContainerCreating, your cluster might have run into this issue. In this case, proceed with step 2.

  2. Check the upgrade repository volume in the Longhorn GUI.

    1. Go to Longhorn GUI.

    2. Navigate to the Volume page.

    3. Check the upgrade repository VM volume. It should be attached to a pod called virt-launcher-upgrade-repo-hvst-<upgrade-name>. If one of the volume’s replicas stays in Stopped (gray color), the cluster is running into the issue.

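A minimal sketch of how you might run these checks from the command line instead of the GUI. The namespaces and name patterns are assumptions based on a default Harvester/Longhorn installation, so adjust them to your environment:

  # Check the upgrade repository pod status (assumed to run in the harvester-system namespace)
  kubectl get pods -n harvester-system | grep virt-launcher-upgrade-repo

  # List the Longhorn replicas of the repository VM volume and look for any stuck in "stopped"
  kubectl get replicas.longhorn.io -n longhorn-system | grep <repository-volume-name>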


3. An upgrade is stuck when pre-draining a node

Starting from v1.1.0, Harvester will wait for all volumes to become healthy (when node count >= 3) before upgrading a node. Generally, you can check volumes’ health if an upgrade is stuck in the “pre-draining” state.

Visit “Access Embedded Longhorn” to see how to access the embedded Longhorn GUI.

You can also check the pre-drain job logs. Please refer to Phase 4: Upgrade nodes in the troubleshooting guide.
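If you prefer the command line over the embedded Longhorn GUI, a quick way to spot unhealthy volumes is to list attached volumes together with their robustness. This is a sketch assuming the default longhorn-system namespace and that yq is available:

  # Print each attached volume and its robustness; anything other than "healthy" keeps the pre-drain waiting
  kubectl get volumes.longhorn.io -n longhorn-system -o yaml | \
    yq '.items[] | select(.status.state == "attached") | .metadata.name + ": " + .status.robustness'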


4. An upgrade is stuck in upgrading the first node: Job was active longer than the specified deadline

An upgrade fails with the error Job was active longer than the specified deadline.

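A sketch of how you might find which upgrade job hit its deadline and why. The hvst-upgrade- prefix follows the naming used elsewhere on this page, and the exact job name in your cluster will differ:

  # List upgrade-related jobs and look for one that has failed
  kubectl get jobs -n harvester-system | grep hvst-upgrade

  # Inspect the logs of the failed job (replace the placeholder with the real job name)
  kubectl logs -n harvester-system jobs/<failed-job-name> --tail=50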


5. An upgrade is stuck in the Pre-drained state

You might see an upgrade stuck in the "pre-drained" state.


At this stage, Kubernetes is supposed to drain the workloads on the node, but several issues can cause the process to stall.

5.1 The node contains a Longhorn instance-manager-r pod that serves single-replica volume(s)

Longhorn doesn’t allow draining a node if the node contains the last surviving replica of a volume. To check if a node is running into this situation, follow these steps:

  1. List single-replica volumes with the command:

    kubectl get volumes.longhorn.io -A -o yaml | yq '.items[] | select(.spec.numberOfReplicas == 1) | .metadata.namespace + "/" + .metadata.name'

    For example:

    $ kubectl get volumes.longhorn.io -A -o yaml | yq '.items[] | select(.spec.numberOfReplicas == 1) | .metadata.namespace + "/" + .metadata.name'
    longhorn-system/pvc-d1f19bab-200e-483b-b348-c87cfbba85ab
  2. Check if the replica resides on the stuck node:

    List the NodeID of the volume’s replica with the command:

    kubectl get replicas.longhorn.io -n longhorn-system -o yaml | yq '.items[] | select(.metadata.labels.longhornvolume == "<volume>") | .spec.nodeID'

    For example:

    $ kubectl get replicas.longhorn.io -n longhorn-system -o yaml | yq '.items[] | select(.metadata.labels.longhornvolume == "pvc-d1f19bab-200e-483b-b348-c87cfbba85ab") | .spec.nodeID'
    node1

    If the result shows that the replica resides on the node where the upgrade is stuck (in this example, node1), your cluster is hitting this issue.

There are a couple of ways to address this situation. Choose the most appropriate method for your VM:

  1. Shut down the VM that uses the single-replica volume to detach the volume, allowing the upgrade to continue.
  2. Adjust the volume's replicas to more than one (a command-line alternative is sketched after this list).
    1. Go to Longhorn GUI.
    2. Go to the Volume page.
    3. Locate the problematic volume, click the icon on the right side, and then select Update Replicas Count.
    4. Increase the Number of Replicas and select OK.
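If you prefer the command line over the Longhorn GUI, you can raise the replica count directly on the Longhorn volume object. This is a sketch under the assumption that updating spec.numberOfReplicas on the volume CR is acceptable in your environment; replace the volume name with your own and choose a replica count that fits your cluster:

  # Bump the replica count of the single-replica volume to 2 (volume name taken from the earlier example)
  kubectl patch volumes.longhorn.io pvc-d1f19bab-200e-483b-b348-c87cfbba85ab -n longhorn-system \
    --type merge -p '{"spec": {"numberOfReplicas": 2}}'

Longhorn should then schedule an additional replica; if in doubt, use the GUI steps above instead.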

5.2 Misconfigured Longhorn instance-manager-r Pod Disruption Budgets (PDB)

A misconfigured PDB could cause this issue. To check if that’s the case, perform the following steps:

  1. Assume the stuck node is harvester-node-1.

  2. Check the instance-manager-e or instance-manager-r pod names on the stuck node:

    $ kubectl get pods -n longhorn-system --field-selector spec.nodeName=harvester-node-1 | grep instance-manager
    instance-manager-r-d4ed2788 1/1 Running 0 3d8h

    The output above shows that the instance-manager-r-d4ed2788 pod is on the node.

  3. Check Rancher logs and verify that the instance-manager-e or instance-manager-r pod can’t be drained:

    $ kubectl logs deployment/rancher -n cattle-system
    ...
    2023-03-28T17:10:52.199575910Z 2023/03/28 17:10:52 [INFO] [planner] rkecluster fleet-local/local: waiting: draining etcd node(s) custom-4f8cb698b24a,custom-a0f714579def
    2023-03-28T17:10:55.034453029Z evicting pod longhorn-system/instance-manager-r-d4ed2788
    2023-03-28T17:10:55.080933607Z error when evicting pods/"instance-manager-r-d4ed2788" -n "longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  4. Run the command to check if there is a PDB associated with the stuck node:

    $ kubectl get pdb -n longhorn-system -o yaml | yq '.items[] | select(.spec.selector.matchLabels."longhorn.io/node"=="harvester-node-1") | .metadata.name'
    instance-manager-r-466e3c7f
  5. Check the instance manager associated with this PDB:

    $ kubectl get instancemanager instance-manager-r-466e3c7f -n longhorn-system -o yaml | yq -e '.spec.nodeID'
    harvester-node-2

    If the output doesn’t match the stuck node (in this example output, harvester-node-2 doesn’t match the stuck node harvester-node-1), then we can conclude this issue happens.

  6. Before applying the workaround, check if all volumes are healthy:

    kubectl get volumes -n longhorn-system -o yaml | yq '.items[] | select(.status.state == "attached")| .status.robustness'

    The output should show healthy for every attached volume. If that is not the case, you might want to uncordon nodes to let the volumes become healthy again (see the sketch after this list).

  7. Remove the misconfigured PDB:

    kubectl delete pdb instance-manager-r-466e3c7f -n longhorn-system
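A minimal sketch of the uncordon step mentioned in step 6; the node name is a placeholder you should replace with the node that was previously cordoned or drained:

  # Look for nodes marked SchedulingDisabled after a partial drain
  kubectl get nodes

  # Uncordon the node so Longhorn can rebuild replicas on it and volumes can return to healthy
  kubectl uncordon <node-name>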

5.3 The instance-manager-e pod could not be drained

During an upgrade, you might encounter an issue where you can’t drain the instance-manager-e pod. When this situation occurs, you will see error messages in the Rancher logs like the ones shown below:

  $ kubectl logs deployment/rancher -n cattle-system | grep "evicting pod"
  evicting pod longhorn-system/instance-manager-r-a06a43f3437ab4f643eea7053b915a80
  evicting pod longhorn-system/instance-manager-e-452e87d2
  error when evicting pods/"instance-manager-r-a06a43f3437ab4f643eea7053b915a80" -n "Longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"instance-manager-e-452e87d2" -n "longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.

Check the instance-manager-e to see if any engine instances remain.

  $ kubectl get instancemanager instance-manager-e-452e87d2 -n longhorn-system -o yaml | yq -e ".status.instances"
  pvc-7b120d60-1577-4716-be5a-62348271025a-e-1cd53c57:
    spec:
      name: pvc-7b120d60-1577-4716-be5a-62348271025a-e-1cd53c57
    status:
      endpoint: ""
      errorMsg: ""
      listen: ""
      portEnd: 10001
      portStart: 10001
      resourceVersion: 0
      state: running
      type: ""

In this example, the instance-manager-e-452e87d2 still has an engine instance, so you can’t drain the pod.

Check the number of engines to see if any of them is redundant. Each PVC should have only one engine.

  # kubectl get engines -n longhorn-system -l longhornvolume=pvc-7b120d60-1577-4716-be5a-62348271025a
  NAME STATE NODE INSTANCEMANAGER IMAGE AGE
  pvc-76120d60-1577-4716-be5a-62348271025a-e-08220662 running harvester-qv4hd instance-manager-e-625d715e2f2e7065d64339f9b31407c2 longhornio/longhorn-engine:v1.4.3 2d12h
  pvc-7b120d60-1577-4716-be5a-62348271025a-e-lcd53c57 running harvester-lhlkv instance-manager-e-452e87d2 longhornio/longhorn-engine:v1.4.3 4d10h

The example above shows that two engines exist for the same PVC, which is a known issue in Longhorn #6642. To resolve this, delete the redundant engine to allow the upgrade to continue.

To determine which engine is the correct one, use the following command:

  $ kubectl get volumes pvc-7b120d60-1577-4716-be5a-62348271025a -n longhorn-system
  NAME STATE ROBUSTNESS SCHEDULED SIZE NODE AGE
  pvc-7b120d60-1577-4716-be5a-62348271025a attached healthy 42949672960 harvester-q4vhd 4d10h

In this example, the volume pvc-7b120d60-1577-4716-be5a-62348271025a is active on the node harvester-q4vhd, indicating that the engine not running on this node is redundant.

To make the engine inactive and trigger its automatic deletion by Longhorn, run the following command:

  $ kubectl patch engine pvc-7b120d60-1577-4716-be5a-62348271025a-e-lcd53c57 -n longhorn-system --type='json' -p='[{"op": "replace", "path": "/spec/active", "value": false}]'
  engine.longhorn.io/pvc-7b120d60-1577-4716-be5a-62348271025a-e-lcd53c57 patched

After a few seconds, you can verify the engine’s status:

  $ kubectl get engine -n longhorn-system | grep pvc-7b120d60-1577-4716-be5a-62348271025a
  pvc-7b120d60-1577-4716-be5a-62348271025a-e-08220b62 running harvester-q4vhd instance-manager-e-625d715e2f2e7065d64339f9631407c2 longhornio/longhorn-engine:v1.4.3 2d13h

The instance-manager-e pod should now drain successfully, allowing the upgrade to proceed.


6. An upgrade is stuck in the Upgrading System Service state

If you notice the upgrade is stuck in the Upgrading System Service state for a long period of time, you might need to investigate if the upgrade is stuck in the apply-manifests phase.


POD prometheus-rancher-monitoring-prometheus-0 is to be deleted

  1. Check the log of the apply-manifests pod to see if the following messages repeat.

    $ kubectl -n harvester-system logs hvst-upgrade-md6wr-apply-manifests-wqslg --tail=10
    Tue Sep 5 10:20:39 UTC 2023
    there are still 1 pods in cattle-monitoring-system to be deleted
    Tue Sep 5 10:20:45 UTC 2023
    there are still 1 pods in cattle-monitoring-system to be deleted
    Tue Sep 5 10:20:50 UTC 2023
    there are still 1 pods in cattle-monitoring-system to be deleted
    Tue Sep 5 10:20:55 UTC 2023
    there are still 1 pods in cattle-monitoring-system to be deleted
    Tue Sep 5 10:21:00 UTC 2023
    there are still 1 pods in cattle-monitoring-system to be deleted
  2. Check if the prometheus-rancher-monitoring-prometheus-0 pod is stuck with the status Terminating.

    $ kubectl -n cattle-monitoring-system get pods
    NAME READY STATUS RESTARTS AGE
    prometheus-rancher-monitoring-prometheus-0 0/3 Terminating 0 19d
  3. Find the UID of the terminating pod with the following command:

    $ kubectl -n cattle-monitoring-system get pod prometheus-rancher-monitoring-prometheus-0 -o jsonpath='{.metadata.uid}'
    33f43165-6faa-4648-927d-69097901471c
  4. Get access to any node of the cluster via the console or SSH.

  5. Search for the related log messages in /var/lib/rancher/rke2/agent/logs/kubelet.log using the pod’s UID.

    1. E0905 10:26:18.769199 17399 reconciler.go:208] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"pvc-7781c988-c35b-4cf8-89e6-f2907ef33603\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-7781c988-c35b-4cf8-89e6-f2907ef33603\") pod \"33f43165-6faa-4648-927d-69097901471c\" (UID: \"33f43165-6faa-4648-927d-69097901471c\") : UnmountVolume.NewUnmounter failed for volume \"pvc-7781c988-c35b-4cf8-89e6-f2907ef33603\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-7781c988-c35b-4cf8-89e6-f2907ef33603\") pod \"33f43165-6faa-4648-927d-69097901471c\" (UID: \"33f43165-6faa-4648-927d-69097901471c\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/33f43165-6faa-4648-927d-69097901471c/volumes/kubernetes.io~csi/pvc-7781c988-c35b-4cf8-89e6-f2907ef33603/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/33f43165-6faa-4648-927d-69097901471c/volumes/kubernetes.io~csi/pvc-7781c988-c35b-4cf8-89e6-f2907ef33603/vol_data.json]: open /var/lib/kubelet/pods/33f43165-6faa-4648-927d-69097901471c/volumes/kubernetes.io~csi/pvc-7781c988-c35b-4cf8-89e6-f2907ef33603/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"pvc-7781c988-c35b-4cf8-89e6-f2907ef33603\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-7781c988-c35b-4cf8-89e6-f2907ef33603\") pod \"33f43165-6faa-4648-927d-69097901471c\" (UID: \"33f43165-6faa-4648-927d-69097901471c\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/33f43165-6faa-4648-927d-69097901471c/volumes/kubernetes.io~csi/pvc-7781c988-c35b-4cf8-89e6-f2907ef33603/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/33f43165-6faa-4648-927d-69097901471c/volumes/kubernetes.io~csi/pvc-7781c988-c35b-4cf8-89e6-f2907ef33603/vol_data.json]: open /var/lib/kubelet/pods/33f43165-6faa-4648-927d-69097901471c/volumes/kubernetes.io~csi/pvc-7781c988-c35b-4cf8-89e6-f2907ef33603/vol_data.json: no such file or directory"

    If kubelet continues to complain about the volume failing to unmount, apply the following workaround to allow the upgrade to proceed.

  6. Forcibly remove the pod stuck with the status Terminating with the following command:

    kubectl delete pod prometheus-rancher-monitoring-prometheus-0 -n cattle-monitoring-system --force

Multiple PODs in cattle-monitoring-system namespace are to be deleted

  1. Check the log of the apply-manifests pod to see if the following messages repeat.

    there are still 10 pods in cattle-monitoring-system to be deleted
    Fri Dec 8 19:06:56 UTC 2023
    there are still 10 pods in cattle-monitoring-system to be deleted
    Fri Dec 8 19:07:01 UTC 2023

    If the log keeps reporting 10 (or some other number of) pods to be deleted, the cluster has encountered the following issue.

    The monitoring feature is deployed from the rancher-monitoring ManagedChart. In Harvester v1.2.0 and v1.2.1, this ManagedChart is converted to a Harvester Addon during the upgrade. When the ManagedChart rancher-monitoring is deleted, all of the generated resources (Deployments, DaemonSets, and so on) are normally deleted automatically. In this case, however, those resources are not deleted, which is what the log above reflects. The following instructions guide you through deleting them manually.
  2. Locate the affected resources in the cattle-monitoring-system namespace.

    Root-level resources in cattle-monitoring-system:

    Custom resource Prometheus: rancher-monitoring-prometheus
      Sub-object: statefulset.apps/prometheus-rancher-monitoring-prometheus
    Custom resource Alertmanager: rancher-monitoring-alertmanager
      Sub-object: statefulset.apps/alertmanager-rancher-monitoring-alertmanager
    Deployments:
      rancher-monitoring-grafana
      rancher-monitoring-kube-state-metrics
      rancher-monitoring-operator
      rancher-monitoring-prometheus-adapter
    DaemonSet:
      rancher-monitoring-prometheus-node-exporter
  3. Delete the affected resources.

    Use the commands below to delete them. While doing so, keep checking the log of the `apply-manifests` pod until it no longer reports `there are still x pods in cattle-monitoring-system to be deleted`.

    kubectl delete prometheus rancher-monitoring-prometheus -n cattle-monitoring-system
    kubectl delete alertmanager rancher-monitoring-alertmanager -n cattle-monitoring-system
    kubectl delete deployment rancher-monitoring-grafana -n cattle-monitoring-system
    kubectl delete deployment rancher-monitoring-kube-state-metrics -n cattle-monitoring-system
    kubectl delete deployment rancher-monitoring-operator -n cattle-monitoring-system
    kubectl delete deployment rancher-monitoring-prometheus-adapter -n cattle-monitoring-system
    kubectl delete daemonset rancher-monitoring-prometheus-node-exporter -n cattle-monitoring-system

    note

    You may need to run some of the commands more than once to completely delete the resources.
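A simple way to verify the cleanup is to watch the namespace until no monitoring pods remain; this check is only a convenience and assumes the remaining pods in the namespace all belong to rancher-monitoring:

  # The list should eventually be empty once all rancher-monitoring resources are gone
  kubectl get pods -n cattle-monitoring-system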


7. Upgrade stuck in the Upgrading System Service state

If an upgrade is stuck in an Upgrading System Service state for an extended period, some system services’ certificates may have expired. To investigate and resolve this issue, follow these steps:

  1. Find the apply-manifest job’s name with the command:

    kubectl get jobs -n harvester-system -l harvesterhci.io/upgradeComponent=manifest

    Example output:

    NAME COMPLETIONS DURATION AGE
    hvst-upgrade-9gmg2-apply-manifests 0/1 46s 46s
  2. Check the job’s log with the command:

    kubectl logs jobs/hvst-upgrade-9gmg2-apply-manifests -n harvester-system

    If the following messages appear in the log, continue to the next step:

    Waiting for CAPI cluster fleet-local/local to be provisioned (current phase: Provisioning, current generation: 30259)...
    Waiting for CAPI cluster fleet-local/local to be provisioned (current phase: Provisioning, current generation: 30259)...
    Waiting for CAPI cluster fleet-local/local to be provisioned (current phase: Provisioning, current generation: 30259)...
    Waiting for CAPI cluster fleet-local/local to be provisioned (current phase: Provisioning, current generation: 30259)...
  3. Check CAPI cluster’s state with the command:

    kubectl get clusters.provisioning.cattle.io local -n fleet-local -o yaml

    If you see a condition similar to the one below, it’s likely that the cluster has encountered the issue:

    - lastUpdateTime: "2023-01-17T16:26:48Z"
      message: 'configuring bootstrap node(s) custom-24cb32ce8387: waiting for probes:
        kube-controller-manager, kube-scheduler'
      reason: Waiting
      status: Unknown
      type: Updated
  4. Find the machine's hostname with the following command, and follow the workaround to check whether service certificates have expired on that node (a sketch for checking certificate expiry is included after this list):

    kubectl get machines.cluster.x-k8s.io -n fleet-local <machine_name> -o yaml | yq .status.nodeRef.name

    Replace <machine_name> with the machine’s name from the output in the previous step.

    note

    If multiple nodes joined the cluster around the same time, you should perform the workaround on all those nodes.
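A rough sketch of how you might check whether a node's service certificates have expired. The certificate path below is an assumption based on a default RKE2 layout under /var/lib/rancher/rke2 and may differ on your installation; treat the linked workaround as the authoritative procedure. SSH into the node and run:

  # Print the expiry date of the kube-controller-manager serving certificate (path assumed)
  openssl x509 -enddate -noout -in /var/lib/rancher/rke2/server/tls/kube-controller-manager/kube-controller-manager.crt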


8. The registry.suse.com/harvester-beta/vmdp:latest image is not available in air-gapped environment

As of v1.1.0, Harvester no longer packages the registry.suse.com/harvester-beta/vmdp:latest image in the ISO file. Windows VMs created before v1.1.0 used this image as a container disk. Because kubelet may remove old images to free up disk space, Windows VMs in an air-gapped environment can no longer access this image once it is removed. You can fix this issue by changing the image to registry.suse.com/suse/vmdp/vmdp:2.5.4.2 and restarting the Windows VMs.
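A minimal sketch of how you might switch the container disk image and restart the VM. The VM name, namespace, and exact location of the containerDisk entry are assumptions, so verify them in your VM spec before editing:

  # Find the containerDisk volume that references the old vmdp image (placeholders are illustrative)
  kubectl get vm <windows-vm> -n <namespace> -o yaml | grep -B2 -A2 'harvester-beta/vmdp'

  # Edit the VM and change the image to registry.suse.com/suse/vmdp/vmdp:2.5.4.2, then restart the VM
  kubectl edit vm <windows-vm> -n <namespace>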


9. An Upgrade is stuck in the Post-draining state

note

This known issue is fixed in v1.2.1.

The node might be stuck in the OS upgrade process if the upgrade stays in the Post-draining state.


Harvester uses elemental upgrade to upgrade the OS. Check the elemental upgrade logs with the following commands to see if there are any errors:

  # View the post-drain job, which should be named `hvst-upgrade-xxx-post-drain-xxx`
  $ kubectl get pod --selector=harvesterhci.io/upgradeJobType=post-drain -n harvester-system
  # Check the logs with the following command
  $ kubectl logs -n harvester-system pods/hvst-upgrade-xxx-post-drain-xxx

If you see the following error in the logs, an incomplete state.yaml is causing the issue.

  Flag --directory has been deprecated, 'directory' is deprecated please use 'system' instead
  INFO[2023-09-13T12:02:42Z] Starting elemental version 0.3.1
  INFO[2023-09-13T12:02:42Z] reading configuration form '/tmp/tmp.N6rn4F6mKM'
  ERRO[2023-09-13T12:02:42Z] Invalid upgrade command setup undefined state partition
  elemental upgrade failed with return code: 33
  + ret=33
  + '[' 33 '!=' 0 ']'
  + echo 'elemental upgrade failed with return code: 33'
  + cat /host/usr/local/upgrade_tmp/elemental-upgrade-20230913120242.log

In this case, Harvester upgrades elemental-cli to the latest version, which tries to find the state partition from state.yaml. If state.yaml is incomplete, it may fail to find the state partition.
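You can inspect the file directly on the node to confirm that it is incomplete; the path below is the same one used in the workaround later in this section:

  # Print the elemental state file on the affected node
  cat /run/initramfs/cos-state/state.yaml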

The incomplete state.yaml will look like the following.

  # Autogenerated file by elemental client, do not edit
  date: "2023-09-13T08:31:42Z"
  state:
    # we are missing `label` here.
    active:
      source: dir:///tmp/tmp.01deNrXNEC
      label: COS_ACTIVE
      fs: ext2
    passive: null

To work around this issue, remove the incomplete state.yaml file. (The post-draining step retries every 10 minutes.)

  1. Remount the state partition to RW:

    $ mount -o remount,rw /run/initramfs/cos-state

  2. Remove the state.yaml:

    $ rm -f /run/initramfs/cos-state/state.yaml

  3. Remount the state partition to RO:

    $ mount -o remount,ro /run/initramfs/cos-state

After you perform the steps above, post-draining should succeed on the next retry.


10. An upgrade is stuck in the Upgrading System Service state due to a customer-provided SSL certificate without an IP SAN error in fleet-agent

note

This known issue is fixed in v1.2.1.

If an upgrade is stuck in an Upgrading System Service state for an extended period, follow these steps to investigate this issue:

  1. Find the pods related to the upgrade:

    kubectl get pods -A | grep upgrade

    Example output:

    # kubectl get pods -A | grep upgrade
    cattle-system system-upgrade-controller-5685d568ff-tkvxb 1/1 Running 0 85m
    harvester-system hvst-upgrade-vq4hl-apply-manifests-65vv8 1/1 Running 0 87m // waiting for managedchart to be ready
    ..
  2. The hvst-upgrade-vq4hl-apply-manifests-65vv8 pod logs the following messages in a loop:

    Current version: 102.0.0+up40.1.2, Current state: WaitApplied, Current generation: 23
    Sleep for 5 seconds to retry
  3. Check the status of all bundles. Note that a couple of bundles are OutOfSync:

    # kubectl get bundle -A
    NAMESPACE NAME BUNDLEDEPLOYMENTS-READY STATUS
    ...
    fleet-local mcc-local-managed-system-upgrade-controller 1/1
    fleet-local mcc-rancher-logging 0/1 OutOfSync(1) [Cluster fleet-local/local]
    fleet-local mcc-rancher-logging-crd 0/1 OutOfSync(1) [Cluster fleet-local/local]
    fleet-local mcc-rancher-monitoring 0/1 OutOfSync(1) [Cluster fleet-local/local]
    fleet-local mcc-rancher-monitoring-crd 0/1 WaitApplied(1) [Cluster fleet-local/local]
  4. The fleet-agent-* pod has the following error log:

    # fleet-agent pod log:
    time="2023-09-19T12:18:10Z" level=error msg="Failed to register agent: looking up secret cattle-fleet-local-system/fleet-agent-bootstrap: Post \"https://192.168.122.199/apis/fleet.cattle.io/v1alpha1/namespaces/fleet-local/clusterregistrations\": tls: failed to verify certificate: x509: cannot validate certificate for 192.168.122.199 because it doesn't contain any IP SANs"
  5. Check the ssl-certificates settings in Harvester:

    From the command line:

    # kubectl get settings.harvesterhci.io ssl-certificates
    NAME VALUE
    ssl-certificates {"publicCertificate":"-----BEGIN CERTIFICATE-----\nMIIFNDCCAxygAwIBAgIUS7DoHthR/IR30+H/P0pv6HlfOZUwDQYJKoZIhvcNAQEL\nBQAwFjEUMBIGA1UEAwwLZXhhbXBsZS5j...."}

    Alternatively, check the ssl-certificates setting on the Harvester Web UI Settings page.

  6. Check the server-url setting; its value is the VIP:

    # kubectl get settings.management.cattle.io -n cattle-system server-url
    NAME VALUE
    server-url https://192.168.122.199
  7. The root cause:

    The user set a self-signed ssl-certificates value that covers only an FQDN in the Harvester settings, but server-url points to the VIP, so the fleet-agent pod fails to register.

    # For example, create a self-signed certificate for *.example.com
    openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
      -keyout example.key -out example.crt -subj "/CN=example.com" \
      -addext "subjectAltName=DNS:example.com,DNS:*.example.com"
    # The outputs are example.crt and example.key
  8. The workaround:

    Update server-url with a value that matches the certificate, for example https://harv31.example.com:

    # kubectl edit settings.management.cattle.io -n cattle-system server-url
    setting.management.cattle.io/server-url edited
    ...
    # kubectl get settings.management.cattle.io -n cattle-system server-url
    NAME VALUE
    server-url https://harv31.example.com

    After the workaround is applied, Rancher automatically replaces the fleet-agent pod, which then registers successfully, and the upgrade continues.


11. An upgrade is denied because the managed chart rancher-monitoring-crd is not ready

When you start an upgrade, Harvester might return the following error message: admission webhook "validator.harvesterhci.io" denied the request: managed chart rancher-monitoring-crd is not ready, please wait for it to be ready. If this happens, please follow this troubleshooting guide.
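To confirm the state of the managed chart before following the linked troubleshooting guide, you can inspect it directly. This is a sketch assuming the chart lives in the fleet-local namespace, as in a default Harvester installation:

  # Show the status of the rancher-monitoring-crd managed chart (namespace assumed to be fleet-local)
  kubectl get managedcharts.management.cattle.io rancher-monitoring-crd -n fleet-local -o yaml | yq '.status'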