Orchestrate a Local Cluster with Kubernetes
On top of CockroachDB's built-in automation, you can use a third-party orchestration system to simplify and automate even more of your operations, from deployment to scaling to overall cluster management.
This page walks you through a simple demonstration, using the open-source Kubernetes orchestration system. Using either the CockroachDB Helm chart or a few configuration files, you'll quickly create a 3-node local cluster. You'll run some SQL commands against the cluster and then simulate node failure, watching how Kubernetes auto-restarts the node without the need for any manual intervention. You'll then scale the cluster with a single command before shutting the cluster down, again with a single command.
Note:
To orchestrate a physically distributed cluster in production, see Orchestrated Deployments.
Before you begin
Before getting started, it's helpful to review some Kubernetes-specific terminology:
Feature | Description |
---|---|
minikube | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation. |
pod | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. |
StatefulSet | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. |
persistent volume | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart. When using minikube, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. |
persistent volume claim | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. |
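Once the cluster is up later in this tutorial, you can inspect all of these objects directly. For example, the following command (using the resource names from this tutorial) lists the pods, StatefulSet, persistent volumes, and claims in one shot:
$ kubectl get pods,statefulsets,persistentvolumes,persistentvolumeclaims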
Step 1. Start Kubernetes
- Follow Kubernetes' documentation to install minikube, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and kubectl, the command-line tool used to manage Kubernetes from your local workstation.
Note:
Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailable field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.
- Start a local Kubernetes cluster:
$ minikube start
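To confirm that minikube started successfully and that kubectl is configured to talk to it, you can run:
$ kubectl cluster-info
$ kubectl get nodes
You should see a single node named minikube with status Ready.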
Step 2. Start CockroachDB
To start your CockroachDB cluster, you can either use our StatefulSet configuration and related files directly, or you can use the Helm package manager for Kubernetes to simplify the process.
Note:
If you want to use a different certificate authority than the one Kubernetes uses, or if your Kubernetes cluster doesn't fully support certificate-signing requests (e.g., in Amazon EKS), use these configuration files instead of the ones referenced below.
- From your local workstation, use our cockroachdb-statefulset-secure.yaml file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
$ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
serviceaccount "cockroachdb" created
role "cockroachdb" created
clusterrole "cockroachdb" created
rolebinding "cockroachdb" created
clusterrolebinding "cockroachdb" created
service "cockroachdb-public" created
service "cockroachdb" created
poddisruptionbudget "cockroachdb-budget" created
statefulset "cockroachdb" created
Alternatively, if you'd rather start with a configuration file that has been customized for performance:
$ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml
Modify the file wherever there is a TODO comment, then use the file to create the StatefulSet and start the cluster:
$ kubectl create -f cockroachdb-statefulset-secure.yaml
As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
- Get the name of the Pending CSR for the first pod:
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
default.node.cockroachdb-0 1m system:serviceaccount:default:default Pending
node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 4m kubelet Approved,Issued
node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 4m kubelet Approved,Issued
node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 5m kubelet Approved,Issued
If you do not see a Pending CSR, wait a minute and try again.
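Rather than re-running the command manually, you can also watch for new CSRs as they are created (press CTRL + C to stop watching):
$ kubectl get csr --watch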
- Examine the CSR for the first pod:
$ kubectl describe csr default.node.cockroachdb-0
Name: default.node.cockroachdb-0
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500
Requesting User: system:serviceaccount:default:default
Status: Pending
Subject:
Common Name: node
Serial Number:
Organization: Cockroach
Subject Alternative Names:
DNS Names: localhost
cockroachdb-0.cockroachdb.default.svc.cluster.local
cockroachdb-public
IP Addresses: 127.0.0.1
10.48.1.6
Events: <none>
- If everything looks correct, approve the CSR for the first pod:
$ kubectl certificate approve default.node.cockroachdb-0
certificatesigningrequest "default.node.cockroachdb-0" approved
- Repeat steps 1-3 for the other 2 pods.
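As a shortcut, a minimal shell loop can approve the remaining CSRs in one go, assuming the default pod names shown above:
$ for i in 1 2; do kubectl certificate approve default.node.cockroachdb-$i; done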
Initialize the cluster:
- Confirm that three pods are Running successfully. Note that they will not be considered Ready until after the cluster has been initialized:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cockroachdb-0 0/1 Running 0 2m
cockroachdb-1 0/1 Running 0 2m
cockroachdb-2 0/1 Running 0 2m
- Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
$ kubectl get persistentvolumes
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
- Use our cluster-init-secure.yaml file to perform a one-time initialization that joins the nodes into a single cluster:
$ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml
job "cluster-init-secure" created
- Approve the CSR for the one-off pod from which cluster initialization happens:
$ kubectl certificate approve default.client.root
certificatesigningrequest "default.client.root" approved
- Confirm that cluster initialization has completed successfully. The job should be considered successful and the CockroachDB pods should soon be considered Ready:
$ kubectl get job cluster-init-secure
NAME DESIRED SUCCESSFUL AGE
cluster-init-secure 1 1 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cockroachdb-0 1/1 Running 0 3m
cockroachdb-1 1/1 Running 0 3m
cockroachdb-2 1/1 Running 0 3m
Tip:
The StatefulSet configuration sets all CockroachDB nodes to log to stderr, so if you ever need access to a pod/node's logs to troubleshoot, use kubectl logs <podname> rather than checking the log on the persistent volume.
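For example, to view the logs of the first node, or the logs of its previous container instance after a restart:
$ kubectl logs cockroachdb-0
$ kubectl logs cockroachdb-0 --previous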
Alternatively, you can use the Helm package manager. In the likely case that your Kubernetes cluster uses RBAC (e.g., if you are using GKE), you first need to create RBAC resources to grant Tiller access to the Kubernetes API:
- Create a rbac-config.yaml file to define a role and service account:
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
- Create the service account:
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
- Start the Helm server:
$ helm init --service-account tiller
- Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart and setting the Secure.Enabled parameter to true:
Note:
This tutorial uses my-release as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.
$ helm install --name my-release --set Secure.Enabled=true stable/cockroachdb
Behind the scenes, this command uses our cockroachdb-statefulset.yaml file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
Note:
You can customize your deployment by passing additional configuration parameters to helm install using the --set key=value[,key=value] flag. For a production cluster, you should consider modifying the Storage and StorageClass parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume StorageClass in your environment may not be what you want for a database (e.g., on GCE and Azure the default is not SSD).
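For example, here is a sketch of an install that overrides both parameters for a small local test; the 10Gi size and the standard storage class (minikube's default) are illustrative values, not recommendations:
$ helm install --name my-release --set Secure.Enabled=true,Storage=10Gi,StorageClass=standard stable/cockroachdb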
As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
- Get the name of the Pending CSR for the first pod:
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
default.client.root 21s system:serviceaccount:default:my-release-cockroachdb Pending
default.node.my-release-cockroachdb-0 15s system:serviceaccount:default:my-release-cockroachdb Pending
default.node.my-release-cockroachdb-1 16s system:serviceaccount:default:my-release-cockroachdb Pending
default.node.my-release-cockroachdb-2 15s system:serviceaccount:default:my-release-cockroachdb Pending
If you do not see a Pending CSR, wait a minute and try again.
- Examine the CSR for the first pod:
$ kubectl describe csr default.node.my-release-cockroachdb-0
Name: default.node.my-release-cockroachdb-0
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 10 Dec 2018 05:36:35 -0500
Requesting User: system:serviceaccount:default:my-release-cockroachdb
Status: Pending
Subject:
Common Name: node
Serial Number:
Organization: Cockroach
Subject Alternative Names:
DNS Names: localhost
my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local
my-release-cockroachdb-0.my-release-cockroachdb
my-release-cockroachdb-public
my-release-cockroachdb-public.default.svc.cluster.local
IP Addresses: 127.0.0.1
10.48.1.6
Events: <none>
- If everything looks correct, approve the CSR for the first pod:
$ kubectl certificate approve default.node.my-release-cockroachdb-0
certificatesigningrequest "default.node.my-release-cockroachdb-0" approved
- Repeat steps 1-3 for the other 2 pods.
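As with the configuration-file approach, a shell loop can approve the remaining CSRs, assuming the my-release release name:
$ for i in 1 2; do kubectl certificate approve default.node.my-release-cockroachdb-$i; done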
- Confirm that three pods are Running successfully:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-release-cockroachdb-0 0/1 Running 0 6m
my-release-cockroachdb-1 0/1 Running 0 6m
my-release-cockroachdb-2 0/1 Running 0 6m
my-release-cockroachdb-init-hxzsc 0/1 Init:0/1 0 6m
- Approve the CSR for the one-off pod from which cluster initialization happens:
$ kubectl certificate approve default.client.root
certificatesigningrequest "default.client.root" approved
- Confirm that cluster initialization has completed successfully, with each pod showing 1/1 under READY:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-release-cockroachdb-0 1/1 Running 0 8m
my-release-cockroachdb-1 1/1 Running 0 8m
my-release-cockroachdb-2 1/1 Running 0 8m
- Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
$ kubectl get persistentvolumes
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
Tip:
The StatefulSet configuration sets all CockroachDB nodes to log to stderr, so if you ever need access to a pod/node's logs to troubleshoot, use kubectl logs <podname> rather than checking the log on the persistent volume.
Step 3. Use the built-in SQL client
To use the built-in SQL client, you need to launch a pod that runs indefinitely with the cockroach binary inside it, get a shell into the pod, and then start the built-in SQL client.
If you're using the configuration files:
- From your local workstation, use our client-secure.yaml file to launch a pod and keep it running indefinitely:
$ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
pod "cockroachdb-client-secure" created
The pod uses the root client certificate created earlier to initialize the cluster, so there's no CSR approval required.
- Get a shell into the pod and start the CockroachDB built-in SQL client:
$ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
# Welcome to the cockroach SQL interface.
# All statements must be terminated by a semicolon.
# To exit: CTRL + D.
#
# Server version: CockroachDB CCL v1.1.2 (linux amd64, built 2017/11/02 19:32:03, go1.8.3) (same version as client)
# Cluster ID: 3292fe08-939f-4638-b8dd-848074611dba
#
# Enter \? for a brief introduction.
#
root@cockroachdb-public:26257/>
- Run some basic CockroachDB SQL statements:
> CREATE DATABASE bank;
> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
> INSERT INTO bank.accounts VALUES (1, 1000.50);
> SELECT * FROM bank.accounts;
+----+---------+
| id | balance |
+----+---------+
| 1 | 1000.5 |
+----+---------+
(1 row)
> CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
You will need this username and password to access the Admin UI later.
- Exit the SQL shell and pod:
> \q
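If you only need to run a single statement, you can skip the interactive shell and pass the statement with the SQL client's --execute (-e) flag instead; for example:
$ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public -e "SELECT * FROM bank.accounts;"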
If you're using the Helm chart: from your local workstation, use our client-secure.yaml file to launch a pod and keep it running indefinitely.
- Download the file:
$ curl -O \
https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
- In the file, change serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
- Use the file to launch a pod and keep it running indefinitely:
$ kubectl create -f client-secure.yaml
pod "cockroachdb-client-secure" created
The pod uses the root client certificate created earlier to initialize the cluster, so there's no CSR approval required.
- Get a shell into the pod and start the CockroachDB built-in SQL client:
$ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
# Welcome to the cockroach SQL interface.
# All statements must be terminated by a semicolon.
# To exit: CTRL + D.
#
# Server version: CockroachDB CCL v1.1.2 (linux amd64, built 2017/11/02 19:32:03, go1.8.3) (same version as client)
# Cluster ID: 3292fe08-939f-4638-b8dd-848074611dba
#
# Enter \? for a brief introduction.
#
root@my-release-cockroachdb-public:26257/>
- Run some basic CockroachDB SQL statements:
> CREATE DATABASE bank;
> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
> INSERT INTO bank.accounts VALUES (1, 1000.50);
> SELECT * FROM bank.accounts;
+----+---------+
| id | balance |
+----+---------+
| 1 | 1000.5 |
+----+---------+
(1 row)
> CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
You will need this username and password to access the Admin UI later.
- Exit the SQL shell and pod:
> \q
Tip:
This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other cockroach client commands (e.g., cockroach node), repeat step 2 using the appropriate cockroach command.
If you'd prefer to delete the pod and recreate it when needed, run kubectl delete pod cockroachdb-client-secure.
Step 4. Access the Admin UI
To access the cluster's Admin UI:
- Port-forward from your local machine to one of the pods. If you're using the configuration files:
$ kubectl port-forward cockroachdb-0 8080
If you're using the Helm chart:
$ kubectl port-forward my-release-cockroachdb-0 8080
Forwarding from 127.0.0.1:8080 -> 8080
Note:
The port-forward command must be run on the same machine as the web browser in which you want to view the Admin UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.
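If local port 8080 is already taken, you can map a different local port to the pod's port 8080 and browse to that port instead; for example:
$ kubectl port-forward cockroachdb-0 9090:8080
Then go to https://localhost:9090.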
Go to https://localhost:8080 and log in with the username and password you created earlier.
In the UI, verify that the cluster is running as expected:
- Click View nodes list on the right to ensure that all nodes successfully joined the cluster.
- Click the Databases tab on the left to verify that bank is listed.
Step 5. Simulate node failure
Based on the replicas: 3 line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage.
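You can see the desired and current replica counts directly on the StatefulSet (substitute my-release-cockroachdb if you deployed with Helm):
$ kubectl get statefulset cockroachdb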
To see this in action:
- Kill one of the CockroachDB nodes. If you're using the configuration files:
$ kubectl delete pod cockroachdb-2
pod "cockroachdb-2" deleted
If you're using the Helm chart:
$ kubectl delete pod my-release-cockroachdb-2
pod "my-release-cockroachdb-2" deleted
In the Admin UI, the Cluster Overview will soon show one node as Suspect. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy.
Back in the terminal, verify that the pod was automatically restarted:
If you're using the configuration files:
$ kubectl get pod cockroachdb-2
NAME            READY     STATUS    RESTARTS   AGE
cockroachdb-2   1/1       Running   0          12s
If you're using the Helm chart:
$ kubectl get pod my-release-cockroachdb-2
NAME                       READY     STATUS    RESTARTS   AGE
my-release-cockroachdb-2   1/1       Running   0          44s
Step 6. Add nodes
- Use the kubectl scale command to add a pod for another CockroachDB node. If you're using the configuration files:
$ kubectl scale statefulset cockroachdb --replicas=4
statefulset "cockroachdb" scaled
If you're using the Helm chart:
$ kubectl scale statefulset my-release-cockroachdb --replicas=4
statefulset "my-release-cockroachdb" scaled
- Verify that the pod for a fourth node, cockroachdb-3 (or my-release-cockroachdb-3), was added successfully:
$ kubectl get pods
If you're using the configuration files:
NAME                      READY     STATUS    RESTARTS   AGE
cockroachdb-0             1/1       Running   0          28m
cockroachdb-1             1/1       Running   0          27m
cockroachdb-2             1/1       Running   0          10m
cockroachdb-3             1/1       Running   0          5s
example-545f866f5-2gsrs   1/1       Running   0          25m
If you're using the Helm chart:
NAME                       READY     STATUS    RESTARTS   AGE
my-release-cockroachdb-0   1/1       Running   0          28m
my-release-cockroachdb-1   1/1       Running   0          27m
my-release-cockroachdb-2   1/1       Running   0          10m
my-release-cockroachdb-3   1/1       Running   0          5s
example-545f866f5-2gsrs    1/1       Running   0          25m
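Because each new pod claims its own durable storage, a fourth persistent volume claim should also have been created; a quick check, assuming the configuration-file naming (for Helm, use datadir-my-release-cockroachdb-3):
$ kubectl get pvc datadir-cockroachdb-3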
Step 7. Remove nodes
To safely remove a node from your cluster, you must first decommission the node and only then adjust the --replicas value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
Warning:
If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see Decommission Nodes.
- Get a shell into the cockroachdb-client-secure pod you created earlier and use the cockroach node status command to get the internal IDs of nodes. If you're using the configuration files:
$ kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --certs-dir=/cockroach-certs --host=cockroachdb-public
id | address | build | started_at | updated_at | is_available | is_live
+----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
(4 rows)
If you're using the Helm chart:
$ kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
id | address | build | started_at | updated_at | is_available | is_live
+----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
(4 rows)
The pod uses the root client certificate created earlier to initialize the cluster, so there's no CSR approval required.
- Note the ID of the node with the highest number in its address (in this case, the address including cockroachdb-3) and use the cockroach node decommission command to decommission it:
Note:
It's important to decommission the node with the highest number in its address because, when you reduce the --replicas count, Kubernetes will remove the pod for that node.
If you're using the configuration files:
$ kubectl exec -it cockroachdb-client-secure -- ./cockroach node decommission <node ID> --certs-dir=/cockroach-certs --host=cockroachdb-public
If you're using the Helm chart:
$ kubectl exec -it cockroachdb-client-secure -- ./cockroach node decommission <node ID> --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
You'll then see the decommissioning status print to stderr as it changes:
id | is_live | replicas | is_decommissioning | is_draining
+---+---------+----------+--------------------+-------------+
4 | true | 73 | true | false
(1 row)
Once the node has been fully decommissioned and stopped, you'll see a confirmation:
id | is_live | replicas | is_decommissioning | is_draining
+---+---------+----------+--------------------+-------------+
4 | true | 0 | true | false
(1 row)
No more data reported on target nodes. Please verify cluster health before removing the nodes.
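You can also check decommissioning progress from another shell using the --decommission flag of cockroach node status; a sketch, assuming the configuration-file service name:
$ kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --decommission --certs-dir=/cockroach-certs --host=cockroachdb-public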
- Once the node has been decommissioned, use the kubectl scale command to remove a pod from your StatefulSet. If you're using the configuration files:
$ kubectl scale statefulset cockroachdb --replicas=3
statefulset "cockroachdb" scaled
If you're using the Helm chart:
$ kubectl scale statefulset my-release-cockroachdb --replicas=3
statefulset "my-release-cockroachdb" scaled
Step 8. Stop the cluster
- If you plan to restart the cluster, use the minikube stop command. This shuts down the minikube virtual machine but preserves all the resources you created:
$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.
You can restore the cluster to its previous state with minikube start.
- If you do not plan to restart the cluster, use the minikube delete command. This shuts down and deletes the minikube virtual machine and all the resources you created, including persistent volumes:
$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
Tip:
To retain logs, copy them from each pod's stderr before deleting the cluster and all its resources. To access a pod's standard error stream, run kubectl logs <podname>.
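For example, a minimal sketch that saves each pod's logs to a local file before you delete the cluster (adjust the pod names if you deployed with Helm):
$ for p in cockroachdb-0 cockroachdb-1 cockroachdb-2; do kubectl logs $p > $p.log; done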
See also
Explore other core CockroachDB benefits and features.