Upgrade to CockroachDB v19.1
Because of CockroachDB's multi-active availability design, you can perform a "rolling upgrade" of your CockroachDB cluster. This means that you can upgrade nodes one at a time without interrupting the cluster's overall health and operations.
Note:
This page shows you how to upgrade to the latest v19.1 release (v19.1.0) from v2.1.x, or from any patch release in the v19.1.x series. To upgrade within the v2.1.x series, see the v2.1 version of this page.
Step 1. Verify that you can upgrade
When upgrading, you can skip patch releases, but you cannot skip full releases. Therefore, if you are upgrading from v2.0.x to v19.1:
First upgrade to v2.1. Be sure to complete all the steps.
Then return to this page and perform a second rolling upgrade to v19.1.
If you are upgrading from v2.1.x or from any v19.1.x patch release, you do not have to go through intermediate releases; continue to step 2.
Step 2. Prepare to upgrade
Before starting the upgrade, complete the following steps.
Make sure your cluster is behind a load balancer, or your clients are configured to talk to multiple nodes. If your application communicates with a single node, stopping that node to upgrade its CockroachDB binary will cause your application to fail.
Verify the overall health of your cluster using the Admin UI. On the Cluster Overview:
- Under Node Status, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or decommission them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually).
- Under Replication Status, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to identify and resolve the cause of range under-replication and/or unavailability before beginning your upgrade.
- In the Node List:
- Make sure all nodes are on the same version. If any nodes are behind, upgrade them to the cluster's current version first, and then start this process over.
- Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to Metrics > Dashboard: Hardware and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider adding nodes to your cluster before beginning your upgrade.
Capture the cluster's current state by running the cockroach debug zip command against any node in the cluster. If the upgrade does not go according to plan, the captured details will help you and Cockroach Labs troubleshoot issues.
Back up the cluster. If the upgrade does not go according to plan, you can use the data to restore your cluster to its previous state.
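For example, a hypothetical debug zip invocation on a secure cluster (the output file name and host are placeholders):
$ cockroach debug zip ./cockroach-debug.zip --certs-dir=certs --host=<address of any node>
And a minimal backup sketch using the enterprise BACKUP statement (the database name and storage URL are placeholders; clusters without an enterprise license can export data with cockroach dump instead):
> BACKUP DATABASE <database> TO 'gs://<bucket>/pre-upgrade-backup';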
Step 3. Decide how the upgrade will be finalized
Note:
This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades within the v19.1.x series, skip this step.
By default, after all nodes are running the new version, the upgrade process will be auto-finalized. This will enable certain features and performance improvements introduced in v19.1. However, it will no longer be possible to perform a downgrade to v2.1. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade. For this reason, we recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in step 5:
Upgrade to v2.1, if you haven't already.
Start the cockroach sql shell against any node in the cluster.
Set the cluster.preserve_downgrade_option cluster setting:
> SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.1';
It is only possible to set this setting to the current cluster version.
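If you are unsure of the cluster's current version, you can check it from the same shell; for example:
> SHOW CLUSTER SETTING version;
On a fully upgraded and finalized v2.1.x cluster, this reports 2.1, which is the value used above.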
Features that require upgrade finalization
When upgrading from v2.1 to v19.1, certain features and performance improvements will be enabled only after finalizing the upgrade, including but not limited to:
- Cascading replication zones: After finalization, replication zones will inherit empty values from their parent. For example, if the replication zone for a table is not explicitly set with num_replicas, it will inherit that value from its direct parent, whether that's the .default replication zone for the entire cluster or the replication zone for the database containing the table.
- Table statistics generation: After finalization, CockroachDB will generate table statistics automatically as tables are updated, and you will be able to manually generate table statistics using the CREATE STATISTICS statement (see the example after this list).
- Load-based splitting: After finalization, CockroachDB will automatically split frequently accessed keys into smaller ranges to optimize your cluster's performance.
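For example, after finalization you could manually generate statistics for a hypothetical table named accounts (both names below are placeholders):
> CREATE STATISTICS accounts_stats FROM accounts;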
Step 4. Perform the rolling upgrade
For each node in your cluster, complete the following steps.
Tip:
We recommend creating scripts to perform these steps instead of performing them manually; a minimal sketch appears after the steps below.
Warning:
Upgrade only one node at a time, and wait at least one minute after a node rejoins the cluster to upgrade the next node. Simultaneously upgrading more than one node increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability.
Connect to the node.
Stop the cockroach process.
Without a process manager like systemd, use this command:
$ pkill cockroach
If you are using systemd as the process manager, use this command to stop a node without systemd restarting it:
$ systemctl stop <systemd config filename>
Then verify that the process has stopped:
$ ps aux | grep cockroach
Alternatively, you can check the node's logs for the message server drained and shutdown completed.
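For example, assuming the default log location under the node's store directory (a placeholder path; adjust it for your deployment):
$ grep -F 'server drained and shutdown completed' cockroach-data/logs/cockroach.log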
- Download and install the CockroachDB binary you want to use:
For Mac:
# Get the CockroachDB tarball:
$ curl -O https://binaries.cockroachdb.com/cockroach-v19.1.0.darwin-10.9-amd64.tgz
# Extract the binary:
$ tar xfz cockroach-v19.1.0.darwin-10.9-amd64.tgz
For Linux:
# Get the CockroachDB tarball:
$ wget https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz
# Extract the binary:
$ tar xfz cockroach-v19.1.0.linux-amd64.tgz
- If you use cockroach in your $PATH, rename the outdated cockroach binary, and then move the new one into its place:
$ i="$(which cockroach)"; mv "$i" "$i"_old
$ cp -i cockroach-v19.1.0.darwin-10.9-amd64/cockroach /usr/local/bin/cockroach
$ i="$(which cockroach)"; mv "$i" "$i"_old
$ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin/cockroach
- Start the node to have it rejoin the cluster.
Without a process manager like systemd, re-run the cockroach start command that you used to start the node initially, for example:
$ cockroach start \
--certs-dir=certs \
--advertise-addr=<node address> \
--join=<node1 address>,<node2 address>,<node3 address>
If you are using systemd as the process manager, run this command to start the node:
$ systemctl start <systemd config filename>
- Verify that the node has rejoined the cluster through its output to stdout or through the Admin UI.
Note:
To access the Admin UI for a secure cluster, create a user with a password. Then open a browser and go to https://<any node's external IP address>:8080. On accessing the Admin UI, you will see a Login screen, where you will need to enter your username and password.
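For example, from the cockroach sql shell on any node (the username and password are placeholders):
> CREATE USER roach WITH PASSWORD '<password>';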
- If you use cockroach in your $PATH, you can remove the old binary:
$ rm /usr/local/bin/cockroach_old
If you leave versioned binaries on your servers, you do not need to do anything.
- Wait at least one minute after the node has rejoined the cluster, and then repeat these steps for the next node.
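As suggested in the tip above, these per-node steps lend themselves to scripting. The following is a minimal sketch, not a production script: it assumes systemd manages a unit named cockroach, passwordless SSH to each node, and a new binary already staged at /tmp/cockroach-v19.1.0 on every host (all names and paths are placeholders).
#!/usr/bin/env bash
# Hypothetical rolling-upgrade helper: node names, the systemd unit name,
# and the staged binary path are placeholders for your own environment.
set -euo pipefail

NODES=("node1.example.com" "node2.example.com" "node3.example.com")
NEW_BINARY="/tmp/cockroach-v19.1.0"   # staged on every host beforehand

for node in "${NODES[@]}"; do
  echo "Upgrading ${node}..."
  # Stop the node without systemd restarting it.
  ssh "${node}" "sudo systemctl stop cockroach"
  # Rename the old binary and move the new one into its place.
  ssh "${node}" "i=\$(command -v cockroach); sudo mv \"\$i\" \"\${i}_old\"; sudo cp ${NEW_BINARY} \"\$i\""
  # Start the node so it rejoins the cluster.
  ssh "${node}" "sudo systemctl start cockroach"
  # Upgrade only one node at a time; wait at least a minute after it rejoins.
  sleep 90
done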
Step 5. Finish the upgrade
Note:
This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades within the v19.1.x series, skip this step.
If you disabled auto-finalization in step 3, monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.
Once you are satisfied with the new version, re-enable auto-finalization:
- Start the cockroach sql shell against any node in the cluster.
- Re-enable auto-finalization:
> RESET CLUSTER SETTING cluster.preserve_downgrade_option;
Step 6. Troubleshooting
After the upgrade has finalized (whether manually or automatically), it is no longer possible to downgrade to the previous release. If you are experiencing problems, we therefore recommend that you:
- Run the cockroach debug zip command against any node in the cluster to capture your cluster's state.
- Reach out for support from Cockroach Labs, sharing your debug zip.
In the event of catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade.