- Deploy CockroachDB on Microsoft Azure
- Requirements
- Recommendations
- Step 1. Configure your network
- Step 2. Create VMs
- Step 3. Synchronize clocks
- Step 4. Set up load balancing
- Step 5. Generate certificates
- Step 6. Start nodes
- Step 7. Initialize the cluster
- Step 8. Test the cluster
- Step 9. Run a sample workload
- Step 10. Set up monitoring and alerting
- Step 11. Scale the cluster
- Step 12. Use the database
- See also
Deploy CockroachDB on Microsoft Azure
This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Microsoft Azure, using Azure's managed load balancing service to distribute client traffic.
If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select Insecure above for instructions.
Requirements
You must have CockroachDB installed locally. This is necessary for generating and managing your deployment's certificates.
You must have SSH access to each machine. This is necessary for distributing and starting CockroachDB binaries.
Your network configuration must allow TCP communication on the following ports:
- 26257 for intra-cluster and client-cluster communication
- 8080 to expose your Admin UI
Recommendations
If you plan to use CockroachDB in production, carefully review the Production Checklist.
Decide how you want to access your Admin UI:
| Access Level | Description |
|--------------|-------------|
| Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port 8080. |
| Completely open | Set a firewall rule to allow all IP addresses to communicate on port 8080. |
| Completely closed | Set a firewall rule to disallow all communication on port 8080. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI. |
Step 1. Configure your network
CockroachDB requires TCP communication on two ports:
- 26257 (tcp:26257) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes
- 8080 (tcp:8080) for exposing your Admin UI
To enable this in Azure, you must create a Resource Group, Virtual Network, and Network Security Group.

- Create a Resource Group.

- Create a Virtual Network that uses your Resource Group.

- Create a Network Security Group that uses your Resource Group, and then add the following inbound rules to it:
- Admin UI support:
| Field | Recommended Value |
|-------|-------------------|
| Name | cockroachadmin |
| Source | IP Addresses |
| Source IP addresses/CIDR ranges | Your local network's IP ranges |
| Source port ranges | * |
| Destination | Any |
| Destination port range | 8080 |
| Protocol | TCP |
| Action | Allow |
| Priority | Any value > 1000 |
- Application support:
Tip:
If your application is also hosted on the same Azure Virtual Network, you will not need to create a firewall rule for your application to communicate with your load balancer.
| Field | Recommended Value |
|-------|-------------------|
| Name | cockroachapp |
| Source | IP Addresses |
| Source IP addresses/CIDR ranges | Your local network's IP ranges |
| Source port ranges | * |
| Destination | Any |
| Destination port range | 26257 |
| Protocol | TCP |
| Action | Allow |
| Priority | Any value > 1000 |
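If you prefer to script this setup rather than use the Azure Portal, the same resources can be created with the az CLI. The following is only a rough sketch; the resource names, location, address ranges, and source CIDR ranges are placeholders to replace with your own values:

# Create the Resource Group, Virtual Network, and Network Security Group:
$ az group create --name crdb-rg --location eastus
$ az network vnet create --resource-group crdb-rg --name crdb-vnet \
    --address-prefix 10.0.0.0/16 --subnet-name crdb-subnet
$ az network nsg create --resource-group crdb-rg --name crdb-nsg

# Inbound rule for the Admin UI (port 8080), limited to your local network's IP ranges:
$ az network nsg rule create --resource-group crdb-rg --nsg-name crdb-nsg \
    --name cockroachadmin --priority 1010 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 8080 \
    --source-address-prefixes <your local network's CIDR ranges>

# Inbound rule for application and load balancer traffic (port 26257):
$ az network nsg rule create --resource-group crdb-rg --nsg-name crdb-nsg \
    --name cockroachapp --priority 1020 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 26257 \
    --source-address-prefixes <your local network's CIDR ranges>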
Step 2. Create VMs
Create Linux VMs for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate VM for that workload.
- Run at least 3 nodes to ensure survivability.

- Use storage-optimized Ls-series VMs with Premium Storage or local SSD storage with a Linux filesystem such as ext4 (not the Windows ntfs filesystem). For example, Cockroach Labs has used Standard_L4s VMs (4 vCPUs and 32 GiB of RAM per VM) for internal testing.
  - If you choose local SSD storage, on reboot, the VM can come back with the ntfs filesystem. Be sure your automation monitors for this and reformats the disk to the Linux filesystem you chose initially (a sketch of such a check appears at the end of this step).

- Do not use "burstable" B-series VMs, which limit the load on a single core. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well.

- When creating the VMs, make sure to select the Resource Group, Virtual Network, and Network Security Group you created.
For more details, see Hardware Recommendations and Cluster Topology.
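If you use local SSD storage, the following is a minimal sketch of the boot-time check described above. It assumes a hypothetical data disk at /dev/sdb1 and a placeholder mount point; adjust both for your environment:

# Hypothetical device and mount point; replace with your own values.
DISK=/dev/sdb1
MOUNT=/mnt/cockroach-data

# If the disk came back formatted as ntfs after a reboot, reformat it as ext4.
# WARNING: mkfs destroys any data on the device.
if [ "$(lsblk -no FSTYPE "$DISK")" = "ntfs" ]; then
  sudo mkfs.ext4 -F "$DISK"
fi
sudo mkdir -p "$MOUNT"
sudo mount "$DISK" "$MOUNT"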
Step 3. Synchronize clocks
CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.
ntpd
should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run ntpd
properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems.
SSH to the first machine.
Find the ID of the Hyper-V Time Synchronization device:
$ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus
$ python lsvmbus -vv | grep -w "Time Synchronization" -A 3
VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization]
Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee}
Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee
Rel_ID=12, target_cpu=0
- Unbind the device, using the
Device_ID
from the previous command's output:
$ echo <DEVICE_ID> | sudo tee /sys/bus/vmbus/drivers/hv_util/unbind
- Install the
ntp
package:
$ sudo apt-get install ntp
- Stop the NTP daemon:
$ sudo service ntp stop
- Sync the machine's clock with Google's NTP service:
$ sudo ntpd -b time.google.com
To make this change permanent, in the /etc/ntp.conf file, remove or comment out any lines starting with server or pool, and add the following lines:
server time1.google.com iburst
server time2.google.com iburst
server time3.google.com iburst
server time4.google.com iburst
Restart the NTP daemon:
$ sudo service ntp start
Note:
We recommend Google's NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine.
- Verify that the machine is using a Google NTP server:
$ sudo ntpq -p
The active NTP server will be marked with an asterisk.
- Repeat these steps for each machine where a CockroachDB node will run.
Step 4. Set up load balancing
Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing:
Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).
Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.
Microsoft Azure offers fully-managed load balancing to distribute traffic between instances.
Add Azure load balancing. Be sure to:
- Set forwarding rules to route TCP traffic from the load balancer's port 26257 to port 26257 on the nodes.
- Configure health checks to use HTTP port 8080 and path /health?ready=1. This health endpoint ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests.
- Note the provisioned IP Address for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster.
Note:
If you would prefer to use HAProxy instead of Azure's managed load balancing, see the On-Premises tutorial for guidance.
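If you provision the load balancer with the az CLI instead of the Azure Portal, the setup described above looks roughly like the following sketch; the resource, frontend, and backend pool names are placeholders:

# Create a Standard load balancer with a public frontend and a backend pool:
$ az network lb create --resource-group crdb-rg --name crdb-lb \
    --sku Standard --public-ip-address crdb-lb-ip \
    --frontend-ip-name crdb-frontend --backend-pool-name crdb-backend

# Health check: HTTP on port 8080 with path /health?ready=1:
$ az network lb probe create --resource-group crdb-rg --lb-name crdb-lb \
    --name crdb-ready --protocol Http --port 8080 --path "/health?ready=1"

# Forwarding rule: TCP 26257 on the load balancer to TCP 26257 on the nodes:
$ az network lb rule create --resource-group crdb-rg --lb-name crdb-lb \
    --name crdb-sql --protocol Tcp --frontend-port 26257 --backend-port 26257 \
    --frontend-ip-name crdb-frontend --backend-pool-name crdb-backend \
    --probe-name crdb-ready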
Step 5. Generate certificates
You can use either cockroach cert
commands or openssl
commands to generate security certificates. This section features the cockroach cert
commands.
Locally, you'll need to create the following certificates and keys:
- A certificate authority (CA) key pair (ca.crt and ca.key).
- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers.
- A client key pair for the root user. You'll use this to run a sample workload against the cluster as well as some cockroach client commands from your local machine.
Tip:
Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.
Install CockroachDB on your local machine, if you haven't already.
Create two directories:
$ mkdir certs
$ mkdir my-safe-directory
- certs: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes.
- my-safe-directory: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes.

- Create the CA certificate and key:
$ cockroach cert create-ca \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
- Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
$ cockroach cert create-node \
<node1 internal IP address> \
<node1 external IP address> \
<node1 hostname> \
<other common names for node1> \
localhost \
127.0.0.1 \
<load balancer IP address> \
<load balancer hostname> \
<other common names for load balancer instances> \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
- Upload certificates to the first node:
# Create the certs directory:
$ ssh <username>@<node1 address> "mkdir certs"
# Upload the CA certificate and node certificate and key:
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node1 address>:~/certs
- Delete the local copy of the node certificate and key:
$ rm certs/node.crt certs/node.key
Note:
This is necessary because the certificates and keys for additional nodes will also be named node.crt and node.key. As an alternative to deleting these files, you can run the next cockroach cert create-node commands with the --overwrite flag.
- Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
$ cockroach cert create-node \
<node2 internal IP address> \
<node2 external IP address> \
<node2 hostname> \
<other common names for node2> \
localhost \
127.0.0.1 \
<load balancer IP address> \
<load balancer hostname> \
<other common names for load balancer instances> \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
- Upload certificates to the second node:
# Create the certs directory:
$ ssh <username>@<node2 address> "mkdir certs"
# Upload the CA certificate and node certificate and key:
$ scp certs/ca.crt \
certs/node.crt \
certs/node.key \
<username>@<node2 address>:~/certs
Repeat steps 6 - 8 for each additional node.
Create a client certificate and key for the
root
user:
$ cockroach cert create-client \
root \
--certs-dir=certs \
--ca-key=my-safe-directory/ca.key
- Upload certificates to the machine where you will run a sample workload:
# Create the certs directory:
$ ssh <username>@<workload address> "mkdir certs"
# Upload the CA certificate and client certificate and key:
$ scp certs/ca.crt \
certs/client.root.crt \
certs/client.root.key \
<username>@<workload address>:~/certs
In later steps, you'll also use the root
user's certificate to run cockroach
client commands from your local machine. If you might also want to run cockroach
client commands directly on a node (e.g., for local debugging), you'll need to copy the root
user's certificate and key to that node as well.
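At any point, you can check which certificates and keys have been generated locally by listing the contents of the certs directory with the same cockroach cert suite used above:

$ cockroach cert list --certs-dir=certs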
Note:
On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by using a certificate issued by a public CA.
Step 6. Start nodes
You can start the nodes manually or automate the process using systemd.
For each initial node of your cluster, complete the following steps:
Note:
After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
SSH to the machine where you want the node to run.
Download the CockroachDB archive for Linux, and extract the binary:
$ wget -qO- https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz \
| tar xvz
- Copy the binary into the
PATH
:
$ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin
If you get a permissions error, prefix the command with sudo
.
- Run the
cockroach start
command:
$ cockroach start \
--certs-dir=certs \
--advertise-addr=<node1 address> \
--join=<node1 address>,<node2 address>,<node3 address> \
--cache=.25 \
--max-sql-memory=.25 \
--background
This command primes the node to start, using the following flags:
| Flag | Description |
|------|-------------|
| --certs-dir | Specifies the directory where you placed the ca.crt file and the node.crt and node.key files for the node. |
| --advertise-addr | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to 26257. This value must route to an IP address the node is listening on (with --listen-addr unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use --advertise-addr and/or --listen-addr differently. For more details, see Networking. |
| --join | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. |
| --cache, --max-sql-memory | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see Cache and SQL Memory Size. |
| --background | Starts the node in the background so you gain control of the terminal to issue more commands. |

When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set --locality as well. It is also required to use certain enterprise features. For more details, see Locality.

For other flags not explicitly set, the command uses default values. For example, the node stores data in --store=cockroach-data and binds Admin UI HTTP requests to --http-addr=<node1 address>:8080. To set these options manually, see Start a Node.
- Repeat these steps for each additional node that you want in your cluster.
For each initial node of your cluster, complete the following steps:
Note:
After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
SSH to the machine where you want the node to run. Ensure you are logged in as the root user.

Download the CockroachDB archive for Linux, and extract the binary:
$ wget -qO- https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz \
| tar xvz
- Copy the binary into the
PATH
:
$ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin
If you get a permissions error, prefix the command with sudo
.
- Create the Cockroach directory:
$ mkdir /var/lib/cockroach
- Create a Unix user named
cockroach
:
$ useradd cockroach
- Move the certs directory to the cockroach directory.
$ mv certs /var/lib/cockroach/
- Change the ownership of the cockroach directory to the user cockroach:
$ chown -R cockroach.cockroach /var/lib/cockroach
- Download the sample configuration template and save the file in the
/etc/systemd/system/
directory:
$ wget -q -O /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/prod-deployment/securecockroachdb.service
Alternatively, you can create the file yourself and copy the script into it:
[Unit]
Description=Cockroach Database cluster node
Requires=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/cockroach
ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr=<node1 address> --join=<node1 address>,<node2 address>,<node3 address> --cache=.25 --max-sql-memory=.25
TimeoutStopSec=60
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=cockroach
User=cockroach
[Install]
WantedBy=default.target
- In the sample configuration template, specify values for the following flags:
| Flag | Description |
|------|-------------|
| --advertise-addr | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to 26257. This value must route to an IP address the node is listening on (with --listen-addr unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use --advertise-addr and/or --listen-addr differently. For more details, see Networking. |
| --join | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. |

When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set --locality as well. It is also required to use certain enterprise features. For more details, see Locality.

For other flags not explicitly set, the command uses default values. For example, the node stores data in --store=cockroach-data and binds Admin UI HTTP requests to --http-addr=localhost:8080. To set these options manually, see Start a Node.
- Start the CockroachDB cluster:
$ systemctl start securecockroachdb
- Repeat these steps for each additional node that you want in your cluster.
Note:
systemd handles node restarts in case of node failure. To stop a node without systemd restarting it, run systemctl stop securecockroachdb.
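To check that the service is running on a node, or to have it start automatically on boot, you can use standard systemd commands, for example:

# Check the service and follow its logs:
$ systemctl status securecockroachdb
$ journalctl -u securecockroachdb -f

# Optionally, start the service automatically at boot:
$ systemctl enable securecockroachdb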
Step 7. Initialize the cluster
On your local machine, run the cockroach init
command to complete the node startup process and have them join together as a cluster:
$ cockroach init --certs-dir=certs --host=<address of any node>
After running this command, each node prints helpful details to the standard output, such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
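As an optional check before moving on, you can confirm that all nodes have joined the cluster by running cockroach node status from your local machine:

$ cockroach node status --certs-dir=certs --host=<address of any node>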
Step 8. Test the cluster
CockroachDB replicates and distributes data for you behind-the-scenes and uses a Gossip protocol to enable each node to locate data across the cluster.
To test this, use the built-in SQL client locally as follows:
- On your local machine, launch the built-in SQL client:
$ cockroach sql --certs-dir=certs --host=<address of any node>
- Create a securenodetest database:
> CREATE DATABASE securenodetest;
- Use \q to exit the SQL shell.

- Launch the built-in SQL client against a different node:
$ cockroach sql --certs-dir=certs --host=<address of different node>
- View the cluster's databases, which will include securenodetest:
> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| crdb_internal |
| information_schema |
| securenodetest |
| pg_catalog |
| system |
+--------------------+
(5 rows)
- Use \q to exit the SQL shell.
Step 9. Run a sample workload
CockroachDB offers a pre-built workload
binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the TPC-C workload.
Tip:
For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.
- SSH to the machine where you want to run the sample TPC-C workload.
This should be a machine that is not running a CockroachDB node, and it should already have a certs
directory containing ca.crt
, client.root.crt
, and client.root.key
files.
- Download
workload
and make it executable:
$ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
- Rename and copy
workload
into thePATH
:
$ cp -i workload.LATEST /usr/local/bin/workload
- Start the TPC-C workload, pointing it at the IP address of the load balancer and the location of the
ca.crt
,client.root.crt
, andclient.root.key
files:
$ workload run tpcc \
--drop \
--init \
--duration=20m \
--tolerate-errors \
"postgresql://root@<IP ADDRESS OF LOAD BALANCER:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.
Tip:
For more tpcc options, use workload run tpcc --help. For details about other load generators included in workload, use workload run --help.
- To monitor the load generator's progress, open the Admin UI by pointing a browser to the address in the
admin
field in the standard output of any node on startup.
For each user who should have access to the Admin UI for a secure cluster, create a user with a password. On accessing the Admin UI, the users will see a Login screen, where they will need to enter their usernames and passwords.
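For example, a user could be created from your local machine with the built-in SQL client; the username and password below are placeholders, not values required by this tutorial:

$ cockroach sql --certs-dir=certs --host=<address of any node> \
    -e "CREATE USER ui_user WITH PASSWORD 'choose-a-strong-password'"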
Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click Metrics on the left, select the SQL dashboard, and then check the SQL Connections graph. You can use the Graph menu to filter the graph for specific nodes.
Step 10. Set up monitoring and alerting
Despite CockroachDB's various built-in safeguards against failure, it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
For details about available monitoring options and the most important events and metrics to alert on, see Monitoring and Alerting.
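For example, each node exposes metrics in Prometheus format at /_status/vars on the Admin UI port. As a quick, optional check from a machine that has the CA certificate, you can fetch the endpoint directly (the node address is a placeholder):

$ curl --cacert certs/ca.crt https://<address of any node>:8080/_status/vars | head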
Step 11. Scale the cluster
You can start the nodes manually or automate the process using systemd.
For each additional node you want to add to the cluster, complete the following steps:
SSH to the machine where you want the node to run.
Download the CockroachDB archive for Linux, and extract the binary:
$ wget -qO- https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz \
| tar xvz
- Copy the binary into the
PATH
:
$ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin
If you get a permissions error, prefix the command with sudo
.
- Run the
cockroach start
command just like you did for the initial nodes:
$ cockroach start \
--certs-dir=certs \
--advertise-addr=<node4 address> \
--locality=<key-value pairs> \
--cache=.25 \
--max-sql-memory=.25 \
--join=<node1 address>,<node2 address>,<node3 address> \
--background
- Update your load balancer to recognize the new node.
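How you register the new node depends on how your load balancer's backend pool was configured. If you script it with the az CLI, one possible approach is to add the new VM's IP configuration to the backend pool; the resource, NIC, and pool names below are placeholders:

$ az network nic ip-config address-pool add \
    --resource-group crdb-rg \
    --lb-name crdb-lb \
    --address-pool crdb-backend \
    --nic-name <node4 NIC name> \
    --ip-config-name <node4 IP config name>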
For each additional node you want to add to the cluster, complete the following steps:
SSH to the machine where you want the node to run. Ensure you are logged in as the root user.

Download the CockroachDB archive for Linux, and extract the binary:
$ wget -qO- https://binaries.cockroachdb.com/cockroach-v19.1.0.linux-amd64.tgz \
| tar xvz
- Copy the binary into the
PATH
:
$ cp -i cockroach-v19.1.0.linux-amd64/cockroach /usr/local/bin
If you get a permissions error, prefix the command with sudo
.
- Create the Cockroach directory:
$ mkdir /var/lib/cockroach
- Create a Unix user named
cockroach
:
$ useradd cockroach
- Move the certs directory to the cockroach directory.
$ mv certs /var/lib/cockroach/
- Change the ownership of the cockroach directory to the user cockroach:
$ chown -R cockroach.cockroach /var/lib/cockroach
- Download the sample configuration template:
$ wget -q -O securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/prod-deployment/securecockroachdb.service
Alternatively, you can create the file yourself and copy the script into it:
[Unit]
Description=Cockroach Database cluster node
Requires=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/cockroach
ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr=<node1 address> --join=<node1 address>,<node2 address>,<node3 address> --cache=.25 --max-sql-memory=.25
TimeoutStopSec=60
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=cockroach
User=cockroach
[Install]
WantedBy=default.target
Save the file in the /etc/systemd/system/
directory.
- Customize the sample configuration template for your deployment by specifying values for the following flags:

| Flag | Description |
|------|-------------|
| --advertise-addr | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to 26257. This value must route to an IP address the node is listening on (with --listen-addr unspecified, the node listens on all IP addresses). In some networking scenarios, you may need to use --advertise-addr and/or --listen-addr differently. For more details, see Networking. |
| --join | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. |
- Repeat these steps for each additional node that you want in your cluster.
Step 12. Use the database
Now that your deployment is working, you can:
- Implement your data model.
- Create users and grant them privileges.
- Connect your application. Be sure to connect your application to the load balancer, not to a CockroachDB node.
You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see Configure Replication Zones.
Warning:
When running a cluster of 5 nodes or more, it's safest to increase the replication factor for important internal data to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
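As a hedged sketch of that adjustment, the internal zones CockroachDB uses for cluster metadata can be updated from any machine with the root client certificate (exact zone names can vary by version):

$ cockroach sql --certs-dir=certs --host=<address of any node> \
    -e "ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 5;" \
    -e "ALTER RANGE liveness CONFIGURE ZONE USING num_replicas = 5;" \
    -e "ALTER RANGE system CONFIGURE ZONE USING num_replicas = 5;" \
    -e "ALTER DATABASE system CONFIGURE ZONE USING num_replicas = 5;"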