# Release v2.2.3
## Important notes
This release fixes a critical issue affecting clusters with the Monitoring feature enabled. The bug manifests as an infinite redeploy of a monitoring component, which results in the cluster becoming unresponsive [#20111]. Because this bug will eventually affect anyone who has enabled monitoring, we have shipped this patch release so that impacted users can update immediately.
With this release, the following versions are now latest and stable:
| Type | Rancher Version | Docker Tag | Helm Repo | Helm Chart Version |
| --- | --- | --- | --- | --- |
| Latest | v2.2.3 | rancher/rancher:latest | server-charts/latest | v2.2.3 |
| Stable | v2.2.3 | rancher/rancher:stable | server-charts/stable | v2.2.3 |
Please review our version documentation for more details on versioning and tagging conventions.
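For reference, a minimal sketch of pulling the tagged image and adding the Helm repos from the table above; the repo URLs shown are the conventional locations, so confirm them against the version documentation:

```bash
# Pull the image that the version tags in the table resolve to.
docker pull rancher/rancher:v2.2.3

# Add the Helm chart repos referenced in the table.
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
```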
## Features and Enhancements
None
## Major Bugs Fixed Since v2.2.2
- Fixed an issue where cluster monitoring was constantly redeploying and filling up the etcd of a user cluster with ConfigMap resources, eventually making the cluster's API unresponsive [#20111]
## Other notes
### Certificate expiry on Rancher provisioned clusters
In Rancher 2.0 and 2.1, the auto-generated certificates for Rancher provisioned clusters expire after one year. This means that if you created a Rancher provisioned cluster about one year ago, you need to rotate the certificates; otherwise, the cluster will go into a bad state when the certificates expire. In Rancher 2.2.x, the rotation can be performed from the Rancher UI; more details are here.
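As a quick way to see whether a cluster is approaching that one-year expiry, you can inspect a certificate's end date directly on a node. This is a minimal sketch that assumes the RKE-generated certificates live under /etc/kubernetes/ssl, which may differ in your environment:

```bash
# Print the expiry date of the kube-apiserver certificate on a controlplane node.
# The path is the conventional location for RKE-generated certs; adjust as needed.
openssl x509 -noout -enddate -in /etc/kubernetes/ssl/kube-apiserver.pem
```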
### Additional Steps Required for Air Gap Installations and Upgrades
In this release, we've introduced a "system catalog" for managing microservices that Rancher deploys for certain features such as Global DNS, Alerts, and Monitoring. These additional steps are documented as part of the air gap installation instructions.
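The documented air gap instructions are authoritative, but the general pattern for mirroring the additional system-catalog images into a private registry looks like the following sketch; registry.example.com is a placeholder and the image name is illustrative, so use the image list from the air gap docs:

```bash
# On a machine with internet access: pull an image needed behind the air gap.
docker pull rancher/rancher-agent:v2.2.3
# Re-tag it for the private registry that the air gapped cluster can reach.
docker tag rancher/rancher-agent:v2.2.3 registry.example.com/rancher/rancher-agent:v2.2.3
# Push it into the private registry.
docker push registry.example.com/rancher/rancher-agent:v2.2.3
```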
## Known Major Issues
- Project monitoring is temporarily disabled [#19771]
- Cluster monitoring gets redeployed multiple times if the `system-default-registry` setting is configured. The workaround is posted as a comment on the issue [#20202]
- Manual backups are deleted based on the retention count configuration settings, and the recurring snapshot creation time is impacted by taking a manual snapshot. [#18807]
- If snapshotting is disabled, users cannot restore from existing backups or take manual backups. [#18793]
- Global DNS entries are not properly updated when a node that was hosting an associated ingress becomes unavailable. A records pointing to the unavailable hosts will remain on the ingress and in the DNS entry. [#18932]
- Global DNS can't be launched by a regular user; only an admin can create it and then add a user to the Global DNS entry as a member [#19596]
- Deactivating or removing the creator of a cluster will prevent the monitoring feature from deploying successfully. In the case of the “local” cluster, this is the default admin account. [#18787]
- The monitoring feature reserves resources such as CPU and memory. It will fail to deploy if the cluster does not have sufficient resources available for reservation. See our documentation for the recommended resource reservations you should make when enabling monitoring; a quick way to check node headroom is sketched after this list. [#19649]
- Catalog app answers can get out of sync when switching between the UI and YAML forms [#19060]
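As referenced in the monitoring issue above, a quick, non-authoritative way to check whether your nodes have headroom for the monitoring reservations:

```bash
# Show each node's allocatable CPU/memory; compare against the reservations
# recommended in the monitoring documentation before enabling the feature.
kubectl describe nodes | grep -A 7 "Allocatable"

# If metrics-server is available, view current usage as well.
kubectl top nodes
```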
## Versions
### Images
- rancher/rancher:v2.2.3
- rancher/rancher-agent:v2.2.3
### Tools
### Kubernetes
## Upgrades and Rollbacks
Rancher supports both upgrade and rollback starting with v2.0.2. Take note of the version you would like to upgrade or roll back to before changing the Rancher version.
Due to the HA improvements introduced in the v2.1.0 release, the Rancher helm chart is the only supported method for installing or upgrading Rancher. Please use the Rancher helm chart to install HA Rancher. For details, see the HA Install - Installation Outline.
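For orientation, a minimal sketch of a chart-based HA install with Helm v2, assuming the rancher-latest repo has been added as shown earlier and using a placeholder hostname; the HA Install documentation covers certificate options and other prerequisites:

```bash
# Helm v2 style install of the Rancher server chart into the cattle-system namespace.
# rancher.example.com is a placeholder; replace it with your own DNS name.
helm install rancher-latest/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```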
If you are currently using the RKE add-on install method, see Migrating from an RKE add-on install for details on how to move to using a Helm chart.
When upgrading from a version prior to v2.0.3, new pods will be created when scaling up workloads [#14136]. In order to update scheduling rules for workloads [#13527], a new field was added to all workloads on update, which will cause any pods in workloads from previous versions to be re-created.
Note: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be preserved. When rolling back using a Rancher single-node install, you must specify the exact version you want to change the Rancher version to, rather than using the default :latest tag.
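As an illustration of pinning the version on a single-node install; the full rollback procedure, including restoring the data volume from your pre-upgrade backup, is in the single-node upgrade/rollback docs, and the container and volume names below are placeholders:

```bash
# Stop the running Rancher container (name is a placeholder).
docker stop rancher
# Start Rancher again with an explicit version tag instead of :latest,
# reusing the data volume prepared per the rollback documentation.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --volumes-from rancher-data \
  rancher/rancher:v2.2.2   # pin the exact version you are rolling back to
```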
Note: If you had the Helm stable catalog enabled in v2.0.0, we've updated the catalog to point directly to the Kubernetes Helm repo instead of an internal repo. Please delete the custom catalog that is now showing up and re-enable the Helm stable catalog. [#13582]