Deploy CockroachDB On-Premises

This tutorial shows you how to manually deploy a secure multi-node CockroachDB cluster on multiple machines, using HAProxy load balancers to distribute client traffic.

If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can deploy an insecure cluster instead; see the insecure version of this tutorial for instructions.

Requirements

  • You must have CockroachDB installed locally. This is necessary for generating and managing your deployment's certificates.

  • You must have SSH access to each machine. This is necessary for distributing and starting CockroachDB binaries.

  • Your network configuration must allow TCP communication on the following ports:

    • 26257 for intra-cluster and client-cluster communication
    • 8080 to expose your Admin UI
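Once your firewall rules are in place, you can confirm the ports are reachable from another machine. A minimal sketch using bash's /dev/tcp (the addresses below are placeholders; substitute your own machines):

```shell
#!/usr/bin/env bash
# Probe a TCP port and report whether it accepts connections.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# Example: check both CockroachDB ports on one machine.
check_port 127.0.0.1 26257
check_port 127.0.0.1 8080
```

Run this from a machine that should be able to reach the node; a "closed" result for a port you expect to be open points at a firewall or routing problem.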

Recommendations

  • If you plan to use CockroachDB in production, carefully review the Production Checklist.

  • Decide how you want to access your Admin UI:

Access Level | Description
Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port 8080.
Completely open | Set a firewall rule to allow all IP addresses to communicate on port 8080.
Completely closed | Set a firewall rule to disallow all communication on port 8080. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI.

Step 1. Synchronize clocks

CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum allowed offset (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.

ntpd should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well.

  • SSH to the first machine.

  • Disable timesyncd, which tends to be active by default on some Linux distributions:

  1. $ sudo timedatectl set-ntp no

Verify that timesyncd is off:

  1. $ timedatectl

Look for Network time on: no or NTP enabled: no in the output.

  • Install the ntp package:
  1. $ sudo apt-get install ntp
  • Stop the NTP daemon:
  1. $ sudo service ntp stop
  • Sync the machine's clock with Google's NTP service:
  1. $ sudo ntpd -b time.google.com

To make this change permanent, in the /etc/ntp.conf file, remove or comment out any lines starting with server or pool and add the following lines:

  1. server time1.google.com iburst
  2. server time2.google.com iburst
  3. server time3.google.com iburst
  4. server time4.google.com iburst
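This edit can also be scripted. The sketch below is an illustrative stdin-to-stdout filter (not an official tool): it comments out existing server/pool lines and appends the Google time servers. You could apply it with something like `filter_ntp_conf < /etc/ntp.conf > ntp.conf.new` and install the result after reviewing it:

```shell
# Comment out any default 'server'/'pool' lines and append Google's
# NTP servers; reads an ntp.conf on stdin, writes the edited file to stdout.
filter_ntp_conf() {
  sed -E 's/^[[:space:]]*(server|pool)[[:space:]]/# &/'
  printf 'server time%d.google.com iburst\n' 1 2 3 4
}

# Demo on a two-line sample config:
printf 'pool 0.ubuntu.pool.ntp.org iburst\ndriftfile /var/lib/ntp/ntp.drift\n' | filter_ntp_conf
```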

Restart the NTP daemon:

  1. $ sudo service ntp start

Note:
We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.

  • Verify that the machine is using a Google NTP server:
  1. $ sudo ntpq -p

The active NTP server will be marked with an asterisk.
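If you want to script this check, the starred line can be extracted with awk. A sketch; the sample output below is illustrative of `ntpq -p`'s column layout, not captured from a real machine:

```shell
# Print only the peer ntpd has selected (the line ntpq marks with '*').
selected_peer() { awk '$1 ~ /^\*/ { sub(/^\*/, "", $1); print $1 }'; }

# Real usage would be:  ntpq -p | selected_peer
# Demo against a plausible sample of `ntpq -p` output:
selected_peer <<'EOF'
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 time2.google.com .GOOG.          1 u   62   64  377    1.155   -0.013   0.012
*time1.google.com .GOOG.          1 u   34   64  377    1.201   -0.002   0.005
EOF
```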

  • Repeat these steps for each machine where a CockroachDB node will run.

Step 2. Generate certificates

You can use either cockroach cert commands or openssl commands to generate security certificates. This section features the cockroach cert commands.

Locally, you'll need to create the following certificates and keys:

  • A certificate authority (CA) key pair (ca.crt and ca.key).
  • A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers.
  • A client key pair for the root user. You'll use this to run a sample workload against the cluster as well as some cockroach client commands from your local machine.

Tip:
Before beginning, it's useful to collect each of your machines' internal and external IP addresses, as well as any server names you want to issue certificates for.

  • Create two directories:
  1. $ mkdir certs
  1. $ mkdir my-safe-directory
  • certs: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes.
  • my-safe-directory: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes.
  • Create the CA certificate and key:
  1. $ cockroach cert create-ca \
  2. --certs-dir=certs \
  3. --ca-key=my-safe-directory/ca.key
  • Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
  1. $ cockroach cert create-node \
  2. <node1 internal IP address> \
  3. <node1 external IP address> \
  4. <node1 hostname> \
  5. <other common names for node1> \
  6. localhost \
  7. 127.0.0.1 \
  8. <load balancer IP address> \
  9. <load balancer hostname> \
  10. <other common names for load balancer instances> \
  11. --certs-dir=certs \
  12. --ca-key=my-safe-directory/ca.key
  • Upload certificates to the first node:
  1. # Create the certs directory:
  2. $ ssh <username>@<node1 address> "mkdir certs"
  1. # Upload the CA certificate and node certificate and key:
  2. $ scp certs/ca.crt \
  3. certs/node.crt \
  4. certs/node.key \
  5. <username>@<node1 address>:~/certs
  • Delete the local copy of the node certificate and key:
  1. $ rm certs/node.crt certs/node.key

Note:
This is necessary because the certificates and keys for additional nodes will also be named node.crt and node.key. As an alternative to deleting these files, you can run the next cockroach cert create-node commands with the --overwrite flag.

  • Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
  1. $ cockroach cert create-node \
  2. <node2 internal IP address> \
  3. <node2 external IP address> \
  4. <node2 hostname> \
  5. <other common names for node2> \
  6. localhost \
  7. 127.0.0.1 \
  8. <load balancer IP address> \
  9. <load balancer hostname> \
  10. <other common names for load balancer instances> \
  11. --certs-dir=certs \
  12. --ca-key=my-safe-directory/ca.key
  • Upload certificates to the second node:
  1. # Create the certs directory:
  2. $ ssh <username>@<node2 address> "mkdir certs"
  1. # Upload the CA certificate and node certificate and key:
  2. $ scp certs/ca.crt \
  3. certs/node.crt \
  4. certs/node.key \
  5. <username>@<node2 address>:~/certs
  • Repeat the preceding three steps (create the certificate and key, upload them, delete the local copies) for each additional node.
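The create/upload/delete cycle lends itself to a loop. The sketch below only prints the commands it would run (the addresses and the ubuntu username are hypothetical placeholders); review the output, then drop the echo prefixes to execute:

```shell
# Print (not run) the per-node certificate commands; hypothetical
# addresses and username -- substitute your own, then drop the echoes.
plan_node_certs() {
  local lb=$1; shift
  for addr in "$@"; do
    echo cockroach cert create-node "$addr" localhost 127.0.0.1 "$lb" \
      --certs-dir=certs --ca-key=my-safe-directory/ca.key
    echo scp certs/ca.crt certs/node.crt certs/node.key "ubuntu@$addr:~/certs"
    echo rm certs/node.crt certs/node.key
  done
}

plan_node_certs 10.0.0.100 10.0.0.1 10.0.0.2 10.0.0.3
```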

  • Create a client certificate and key for the root user:

  1. $ cockroach cert create-client \
  2. root \
  3. --certs-dir=certs \
  4. --ca-key=my-safe-directory/ca.key
  • Upload certificates to the machine where you will run a sample workload:
  1. # Create the certs directory:
  2. $ ssh <username>@<workload address> "mkdir certs"
  1. # Upload the CA certificate and client certificate and key:
  2. $ scp certs/ca.crt \
  3. certs/client.root.crt \
  4. certs/client.root.key \
  5. <username>@<workload address>:~/certs

In later steps, you'll also use the root user's certificate to run cockroach client commands from your local machine. If you might also want to run cockroach client commands directly on a node (e.g., for local debugging), you'll need to copy the root user's certificate and key to that node as well.

Note:

On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by using a certificate issued by a public CA.
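Before moving on, it can be worth confirming which addresses a certificate was actually issued to. A sketch assuming the openssl CLI is installed (version 1.1.1+ for the -ext flag); every address clients or peers will use to reach the node must appear in the output:

```shell
# Print the Subject Alternative Names a certificate was issued to.
show_sans() { openssl x509 -in "$1" -noout -ext subjectAltName; }

# e.g., before deleting the local copy of a node certificate:
#   show_sans certs/node.crt
```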

Step 3. Start nodes

You can start the nodes manually or automate the process using systemd.

For each initial node of your cluster, complete the following steps:

Note:
After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.

  • SSH to the machine where you want the node to run.

  • Download the CockroachDB archive for Linux, and extract the binary:

  1. $ wget -qO- https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz \
  2. | tar xvz
  • Copy the binary into the PATH:
  1. $ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin

If you get a permissions error, prefix the command with sudo.

  • Run the cockroach start command:
  1. $ cockroach start \
  2. --certs-dir=certs \
  3. --advertise-addr=<node1 address> \
  4. --join=<node1 address>,<node2 address>,<node3 address> \
  5. --cache=.25 \
  6. --max-sql-memory=.25 \
  7. --background

This command primes the node to start, using the following flags:

Flag | Description
--certs-dir | Specifies the directory where you placed the ca.crt file and the node.crt and node.key files for the node.
--advertise-addr | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to 26257. This value must route to an IP address the node is listening on (with --listen-addr unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use --advertise-addr and/or --listen-addr differently. For more details, see Networking.
--join | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
--cache, --max-sql-memory | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see Cache and SQL Memory Size.
--background | Starts the node in the background so you gain control of the terminal to issue more commands.

When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set --locality as well. It is also required to use certain enterprise features. For more details, see Locality.

For other flags not explicitly set, the command uses default values. For example, the node stores data in --store=cockroach-data and binds Admin UI HTTP requests to --http-addr=<node1 address>:8080. To set these options manually, see Start a Node.

  • Repeat these steps for each additional node that you want in your cluster.

For each initial node of your cluster, complete the following steps:

Note:
After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.

  • SSH to the machine where you want the node to run. Ensure you are logged in as the root user.

  • Download the CockroachDB archive for Linux, and extract the binary:

  1. $ wget -qO- https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz \
  2. | tar xvz
  • Copy the binary into the PATH:
  1. $ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin

If you get a permissions error, prefix the command with sudo.

  • Create the Cockroach directory:
  1. $ mkdir /var/lib/cockroach
  • Create a Unix user named cockroach:
  1. $ useradd cockroach
  • Move the certs directory to the cockroach directory.
  1. $ mv certs /var/lib/cockroach/
  • Change the ownership of the Cockroach directory to the user cockroach:
  1. $ chown -R cockroach:cockroach /var/lib/cockroach
  • Download the sample configuration template, securecockroachdb.service:
  1. $ wget -q https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/prod-deployment/securecockroachdb.service

Alternatively, you can create the file yourself and copy the script into it:

  1. [Unit]
  2. Description=Cockroach Database cluster node
  3. Requires=network.target
  4. [Service]
  5. Type=notify
  6. WorkingDirectory=/var/lib/cockroach
  7. ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr=<node1 address> --join=<node1 address>,<node2 address>,<node3 address> --cache=.25 --max-sql-memory=.25
  8. TimeoutStopSec=60
  9. Restart=always
  10. RestartSec=10
  11. StandardOutput=syslog
  12. StandardError=syslog
  13. SyslogIdentifier=cockroach
  14. User=cockroach
  15. [Install]
  16. WantedBy=default.target

Save the file in the /etc/systemd/system/ directory.
  • In the sample configuration template, specify values for the following flags:

Flag | Description
--advertise-addr | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to 26257. This value must route to an IP address the node is listening on (with --listen-addr unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use --advertise-addr and/or --listen-addr differently. For more details, see Networking.
--join | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
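Substituting real addresses for the <...> placeholders can be scripted before installing the unit file. A sketch using sed, with hypothetical example addresses:

```shell
# Substitute node addresses into the systemd unit template
# (reads the template on stdin, writes the filled unit to stdout).
fill_template() {
  sed -e "s/<node1 address>/$1/g" \
      -e "s/<node2 address>/$2/g" \
      -e "s/<node3 address>/$3/g"
}

# Real usage (run as root) might look like:
#   fill_template 10.0.0.1 10.0.0.2 10.0.0.3 \
#     < securecockroachdb.service \
#     > /etc/systemd/system/securecockroachdb.service
```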

When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set --locality as well. It is also required to use certain enterprise features. For more details, see Locality.

For other flags not explicitly set, the command uses default values. For example, the node stores data in --store=cockroach-data and binds Admin UI HTTP requests to --http-addr=localhost:8080. To set these options manually, see Start a Node.

  • Start the CockroachDB cluster:
  1. $ systemctl start securecockroachdb
  • Repeat these steps for each additional node that you want in your cluster.

Note:

systemd handles node restarts in case of node failure. To stop a node without systemd restarting it, run systemctl stop securecockroachdb.

Step 4. Initialize the cluster

On your local machine, run the cockroach init command to complete the node startup process and have them join together as a cluster:

  1. $ cockroach init --certs-dir=certs --host=<address of any node>

After running this command, each node prints helpful details to the standard output, such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.

Step 5. Test the cluster

CockroachDB replicates and distributes data behind the scenes and uses a gossip protocol to enable each node to locate data across the cluster.

To test this, use the built-in SQL client locally as follows:

  • On your local machine, launch the built-in SQL client:
  1. $ cockroach sql --certs-dir=certs --host=<address of any node>
  • Create a securenodetest database:
  1. > CREATE DATABASE securenodetest;
  • Use \q to exit the SQL shell.

  • Launch the built-in SQL client against a different node:

  1. $ cockroach sql --certs-dir=certs --host=<address of different node>
  • View the cluster's databases, which will include securenodetest:
  1. > SHOW DATABASES;
  1. +--------------------+
  2. |      Database      |
  3. +--------------------+
  4. | crdb_internal      |
  5. | information_schema |
  6. | securenodetest     |
  7. | pg_catalog         |
  8. | system             |
  9. +--------------------+
  10. (5 rows)
  • Use \q to exit the SQL shell.

Step 6. Set up HAProxy load balancers

Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing:

  • Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).

  • Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.

Tip:
With a single load balancer, client connections are resilient to node failure, but the load balancer itself is a point of failure. It's therefore best to make load balancing resilient as well by using multiple load balancing instances, with a mechanism like floating IPs or DNS to select load balancers for clients.

HAProxy is one of the most popular open-source TCP load balancers, and CockroachDB includes a built-in command for generating a configuration file that is preset to work with your running cluster, so we feature that tool here.

  • On your local machine, run the cockroach gen haproxy command with the --host flag set to the address of any node and security flags pointing to the CA cert and the client cert and key:
  1. $ cockroach gen haproxy \
  2. --certs-dir=certs \
  3. --host=<address of any node>

By default, the generated configuration file is called haproxy.cfg and looks as follows, with the server addresses pre-populated correctly:

  1. global
  2. maxconn 4096
  3. defaults
  4. mode tcp
  5. # Timeout values should be configured for your specific use.
  6. # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
  7. timeout connect 10s
  8. timeout client 1m
  9. timeout server 1m
  10. # TCP keep-alive on client side. Server already enables them.
  11. option clitcpka
  12. listen psql
  13. bind :26257
  14. mode tcp
  15. balance roundrobin
  16. option httpchk GET /health?ready=1
  17. server cockroach1 <node1 address>:26257 check port 8080
  18. server cockroach2 <node2 address>:26257 check port 8080
  19. server cockroach3 <node3 address>:26257 check port 8080

The file is preset with the minimal configurations needed to work with your running cluster:

Field | Description
timeout connect, timeout client, timeout server | Timeout values that should be suitable for most deployments.
bind | The port that HAProxy listens on. This is the port clients will connect to, and thus it needs to be allowed by your network configuration. This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as 26257 is likely already being used by the CockroachDB node.
balance | The balancing algorithm. This is set to roundrobin to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the HAProxy Configuration Manual for details about this and other balancing algorithms.
option httpchk | The HTTP endpoint that HAProxy uses to check node health. /health?ready=1 ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests.
server | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the --advertise-addr flag on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy.

Note:

For full details on these and other configuration settings, see the HAProxy Configuration Manual.

  • Upload the haproxy.cfg file to the machine where you want to run HAProxy:
  1. $ scp haproxy.cfg <username>@<haproxy address>:~/
  • SSH to the machine where you want to run HAProxy.

  • Install HAProxy:

  1. $ sudo apt-get install haproxy
  • Start HAProxy, with the -f flag pointing to the haproxy.cfg file:
  1. $ haproxy -f haproxy.cfg
  • Repeat these steps for each additional instance of HAProxy you want to run.

Step 7. Run a sample workload

CockroachDB offers a pre-built workload binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the TPC-C workload.

Tip:
For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.

  • SSH to the machine where you want to run the sample TPC-C workload.

This should be a machine that is not running a CockroachDB node, and it should already have a certs directory containing ca.crt, client.root.crt, and client.root.key files.

  • Download workload and make it executable:
  1. $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST && chmod 755 workload.LATEST
  • Rename and copy workload into the PATH:
  1. $ cp -i workload.LATEST /usr/local/bin/workload
  • Start the TPC-C workload, pointing it at the IP address of the load balancer and the location of the ca.crt, client.root.crt, and client.root.key files:
  1. $ workload run tpcc \
  2. --drop \
  3. --init \
  4. --duration=20m \
  5. --tolerate-errors \
  6. "postgresql://root@<IP ADDRESS OF LOAD BALANCER>:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"

This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.
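The connection string is easier to get right when assembled from parts. A sketch with a hypothetical load balancer address; substitute your own:

```shell
# Assemble the secure connection URL piece by piece.
LB="10.0.0.100"    # hypothetical load balancer address
CERTS="certs"      # directory holding ca.crt and the client cert/key
URL="postgresql://root@${LB}:26257/tpcc"
URL="${URL}?sslmode=verify-full&sslrootcert=${CERTS}/ca.crt"
URL="${URL}&sslcert=${CERTS}/client.root.crt&sslkey=${CERTS}/client.root.key"
echo "$URL"
```

Quoting the final URL matters in practice: the `&` separators would otherwise be interpreted by the shell.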

Tip:
For more tpcc options, use workload run tpcc --help. For details about other load generators included in workload, use workload run --help.

  • To monitor the load generator's progress, open the Admin UI by pointing a browser to the address in the admin field in the standard output of any node on startup.

For each user who should have access to the Admin UI for a secure cluster, create a user with a password. On accessing the Admin UI, the users will see a Login screen, where they will need to enter their usernames and passwords.

Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click Metrics on the left, select the SQL dashboard, and then check the SQL Connections graph. You can use the Graph menu to filter the graph for specific nodes.

Step 8. Set up monitoring and alerting

Despite CockroachDB's various built-in safeguards against failure, it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.

For details about available monitoring options and the most important events and metrics to alert on, see Monitoring and Alerting.

Step 9. Scale the cluster

You can start the nodes manually or automate the process using systemd.

For each additional node you want to add to the cluster, complete the following steps:

  • SSH to the machine where you want the node to run.

  • Download the CockroachDB archive for Linux, and extract the binary:

  1. $ wget -qO- https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz \
  2. | tar xvz
  • Copy the binary into the PATH:
  1. $ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin

If you get a permissions error, prefix the command with sudo.

  • Run the cockroach start command for the new node:
  1. $ cockroach start \
  2. --certs-dir=certs \
  3. --advertise-addr=<node4 address> \
  4. --locality=<key-value pairs> \
  5. --cache=.25 \
  6. --max-sql-memory=.25 \
  7. --join=<node1 address>,<node2 address>,<node3 address> \
  8. --background
  • Update your load balancer to recognize the new node.
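One way to update HAProxy is to regenerate the config (which picks up the new node) and hot-reload. The sketch below prints the commands rather than running them; the addresses and username are placeholders:

```shell
# Print (not run) the steps to refresh HAProxy after scaling.
plan_lb_refresh() {
  echo cockroach gen haproxy --certs-dir=certs --host="$1"
  echo scp haproxy.cfg "ubuntu@$2:~/"
  echo 'haproxy -f haproxy.cfg -sf $(pidof haproxy)   # reload without dropping connections'
}

plan_lb_refresh 10.0.0.1 10.0.0.100
```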

For each additional node you want to add to the cluster, complete the following steps:

  • SSH to the machine where you want the node to run. Ensure you are logged in as the root user.

  • Download the CockroachDB archive for Linux, and extract the binary:

  1. $ wget -qO- https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz \
  2. | tar xvz
  • Copy the binary into the PATH:
  1. $ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin

If you get a permissions error, prefix the command with sudo.

  • Create the Cockroach directory:
  1. $ mkdir /var/lib/cockroach
  • Create a Unix user named cockroach:
  1. $ useradd cockroach
  • Move the certs directory to the cockroach directory.
  1. $ mv certs /var/lib/cockroach/
  • Change the ownership of the Cockroach directory to the user cockroach:
  1. $ chown -R cockroach:cockroach /var/lib/cockroach
  • Download the sample configuration template, securecockroachdb.service:
  1. $ wget -q https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/prod-deployment/securecockroachdb.service

Alternatively, you can create the file yourself and copy the script into it:

  1. [Unit]
  2. Description=Cockroach Database cluster node
  3. Requires=network.target
  4. [Service]
  5. Type=notify
  6. WorkingDirectory=/var/lib/cockroach
  7. ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr=<node4 address> --join=<node1 address>,<node2 address>,<node3 address> --cache=.25 --max-sql-memory=.25
  8. TimeoutStopSec=60
  9. Restart=always
  10. RestartSec=10
  11. StandardOutput=syslog
  12. StandardError=syslog
  13. SyslogIdentifier=cockroach
  14. User=cockroach
  15. [Install]
  16. WantedBy=default.target

Save the file in the /etc/systemd/system/ directory.

  • Customize the sample configuration template for your deployment:

Specify values for the following flags in the sample configuration template:

Flag | Description
--advertise-addr | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to 26257. This value must route to an IP address the node is listening on (with --listen-addr unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use --advertise-addr and/or --listen-addr differently. For more details, see Networking.
--join | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.

  • Repeat these steps for each additional node that you want in your cluster.

Step 10. Use the cluster

Now that your deployment is working, you can begin using the cluster.

Warning:

When running a cluster of 5 nodes or more, it's safest to increase the replication factor for important internal data to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
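One way to act on this warning is with replication zone statements run through cockroach sql. The sketch below prints the statements based on v19.1 zone-configuration syntax; verify the range names against your version's documentation before running them:

```shell
# Print ALTER RANGE statements that raise the replication factor
# for important internal ranges to 5 (v19.1 zone config syntax).
for r in meta liveness system; do
  printf 'ALTER RANGE %s CONFIGURE ZONE USING num_replicas = 5;\n' "$r"
done
```

You could feed each printed statement to `cockroach sql --certs-dir=certs --host=<address of any node> --execute="..."` from a machine holding the root client certificate.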
