Linear Scalability
With YugabyteDB, you can add nodes to scale out your cluster very efficiently and reliably in order to achieve more read and write IOPS. In this tutorial, we will look at how YugabyteDB scales while a workload is running. We will run a read-write workload, using a pre-packaged sample application, against a 3-node local cluster with a replication factor of 3, and add nodes to it while the workload is running. We will then observe how the cluster scales out by verifying that the read and write IOPS remain evenly distributed across all the nodes at all times.
If you haven’t installed YugabyteDB yet, do so first by following the Quick Start guide.
1. Setup - create universe
If you have a previously running local universe, destroy it using the following command.
$ ./bin/yb-ctl destroy
Start a new local cluster. By default, this will create a 3-node universe with a replication factor of 3. We configure the number of shards (also known as tablets) per table per tserver to 4 so that we can better observe the load balancing during scale-up and scale-down. Each table will therefore have 4 tablet-leaders on each tserver, and with a replication factor of 3, each tablet-leader will have 2 tablet-followers distributed across the other 2 tservers. As a result, each tserver will host 12 tablets per table (4 tablet-leaders plus 8 tablet-followers).
$ ./bin/yb-ctl --num_shards_per_tserver 4 create
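Optionally, you can verify that all 3 nodes are up before proceeding. The yb-ctl status subcommand lists the processes in the local universe (shown here as a quick sanity check; the exact output format may vary by version).
$ ./bin/yb-ctl status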
2. Run sample key-value app
Run the Cassandra sample key-value app against the local universe by typing the following command.
$ java -jar ./java/yb-sample-apps.jar --workload CassandraKeyValue \
--nodes 127.0.0.1:9042 \
--num_threads_write 1 \
--num_threads_read 4 \
--value_size 4096
The sample application prints some statistics while running, as shown in the example output below. You can read more details about the output of the sample applications here.
2018-05-10 09:10:19,538 [INFO|...] Read: 8988.22 ops/sec (0.44 ms/op), 818159 total ops | Write: 1095.77 ops/sec (0.91 ms/op), 97120 total ops | ...
2018-05-10 09:10:24,539 [INFO|...] Read: 9110.92 ops/sec (0.44 ms/op), 863720 total ops | Write: 1034.06 ops/sec (0.97 ms/op), 102291 total ops | ...
3. Observe IOPS per node
You can check many of the per-node stats by browsing to the tablet-servers page. It should look like the screenshot below, in which the total read and write IOPS per node are highlighted. Note that both reads and writes are roughly the same across all the nodes, indicating uniform usage.
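If you prefer the command line to the browser, the same page can be fetched from the master Admin UI, which the local cluster exposes on 127.0.0.1:7000 by default (a rough check only, since the response is HTML):
$ curl -s http://127.0.0.1:7000/tablet-servers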
4. Add node and observe linear scale out
Add a node to the universe.
$ ./bin/yb-ctl --num_shards_per_tserver 4 add_node
Now we should have 4 nodes. Refresh the tablet-servers page to see the stats update. In a short time, you should see the new node performing a comparable number of reads and writes as the other nodes. The 36 tablets will now be distributed evenly across all 4 nodes, leaving each node with 9 tablets.
The YugabyteDB universe automatically lets the client know to use the newly added node for serving queries. This scaling out of client queries is completely transparent to the application logic, allowing the application to scale linearly for both reads and writes.
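To watch the rebalance happen from a terminal, you can poll the same tablet-servers page at a regular interval. This is just a sketch using the standard watch utility (which may need to be installed separately on macOS); refreshing the page in the browser works just as well.
$ watch -n 5 "curl -s http://127.0.0.1:7000/tablet-servers"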
5. Remove node and observe linear scale in
Remove the recently added node from the universe.
$ ./bin/yb-ctl remove_node 4
- Refresh the tablet-servers page to see the stats update. The Time since heartbeat value for that node will keep increasing. Once that number reaches 60s (i.e. 1 minute), YugabyteDB will change the status of that node from ALIVE to DEAD. Note that at this time the universe is running in an under-replicated state for some subset of tablets.
- After 300s (i.e. 5 minutes), YugabyteDB's remaining nodes will re-spawn the tablets that were lost when node 4 was removed. Each remaining node's tablet count will increase from 9 to 12, getting back to the original total of 36 tablets.
6. Clean up (optional)
Optionally, you can shut down the local cluster created in Step 1.
$ ./bin/yb-ctl destroy
1. Setup - create universe
If you have a previously running local universe, destroy it using the following command.
$ ./yb-docker-ctl destroy
Start a new local cluster. By default, this will create a 3-node universe with a replication factor of 3. We configure the number of shards (also known as tablets) per table per tserver to 4 so that we can better observe the load balancing during scale-up and scale-down. Each table will therefore have 4 tablet-leaders on each tserver, and with a replication factor of 3, each tablet-leader will have 2 tablet-followers distributed across the other 2 tservers. As a result, each tserver will host 12 tablets per table (4 tablet-leaders plus 8 tablet-followers).
$ ./yb-docker-ctl create --num_shards_per_tserver 4
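You can confirm that the master and tserver containers came up by listing them with docker. The name filter below assumes the yb-master-n*/yb-tserver-n* naming convention used by yb-docker-ctl (as seen in the docker cp command later in this tutorial); adjust it if your container names differ.
$ docker ps --filter "name=yb-"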
2. Run sample key-value app
Run the Cassandra sample key-value app against the local universe by typing the following command.
$ docker cp yb-master-n1:/home/yugabyte/java/yb-sample-apps.jar .
$ java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \
--nodes localhost:9042 \
--num_threads_write 1 \
--num_threads_read 4 \
--value_size 4096
The sample application prints some statistics while running, as shown in the example output below. You can read more details about the output of the sample applications here.
2017-11-20 14:02:48,114 [INFO|...] Read: 9893.73 ops/sec (0.40 ms/op), 233458 total ops | Write: 1155.83 ops/sec (0.86 ms/op), 28072 total ops | ...
2017-11-20 14:02:53,118 [INFO|...] Read: 9639.85 ops/sec (0.41 ms/op), 281696 total ops | Write: 1078.74 ops/sec (0.93 ms/op), 33470 total ops | ...
3. Observe IOPS per node
You can check many of the per-node stats by browsing to the tablet-servers page. It should look like the screenshot below, in which the total read and write IOPS per node are highlighted. Note that both reads and writes are roughly the same across all the nodes, indicating uniform usage.
4. Add node and observe linear scale out
Add a node to the universe.
$ ./yb-docker-ctl add_node --num_shards_per_tserver 4
Now we should have 4 nodes. Refresh the tablet-servers page to see the stats update. In a short time, you should see the new node performing a comparable number of reads and writes as the other nodes.
5. Remove node and observe linear scale in
Remove the recently added node from the universe.
$ ./yb-docker-ctl remove_node 4
- Refresh the tablet-servers page to see the stats update. The Time since heartbeat value for that node will keep increasing. Once that number reaches 60s (i.e. 1 minute), YugabyteDB will change the status of that node from ALIVE to DEAD. Note that at this time the universe is running in an under-replicated state for some subset of tablets.
- After 300s (i.e. 5 minutes), YugabyteDB's remaining nodes will re-spawn the tablets that were lost when node 4 was removed. Each remaining node's tablet count will increase from 18 to 24.
6. Clean up (optional)
Optionally, you can shut down the local cluster created in Step 1.
$ ./yb-docker-ctl destroy
1. Setup - create universe
If you have a previously running local universe, destroy it using the following command.
$ kubectl delete -f yugabyte-statefulset.yaml
Start a new local cluster. By default, this will create a 3-node universe with a replication factor of 3.
$ kubectl apply -f yugabyte-statefulset.yaml
Check the Kubernetes dashboard to see the 3 yb-tserver and 3 yb-master pods representing the 3 nodes of the cluster.
$ minikube dashboard
2. Check cluster status with Admin UI
In order to do this, we need to access the Admin UI on port 7000, which is exposed by any of the pods in the yb-master service (one of yb-master-0, yb-master-1, or yb-master-2). To do so, we find the URL for the yb-master-ui LoadBalancer service.
$ minikube service yb-master-ui --url
http://192.168.99.100:31283
The yb-master-0 Admin UI is now available at the above URL.
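If the LoadBalancer URL is not reachable in your environment, port-forwarding to one of the master pods is an alternative way to reach the same Admin UI (a sketch; any of the yb-master pods will do). The UI is then available at http://localhost:7000.
$ kubectl port-forward yb-master-0 7000:7000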
3. Add node and observe linear scale out
Add a node to the universe.
$ kubectl scale statefulset yb-tserver --replicas=4
Now we should have 4 nodes. Refresh the tablet-servers page to see the stats update. YugabyteDB automatically updates application clients to use the newly added node for serving queries. This scaling out of client queries is completely transparent to the application logic, allowing the application to scale linearly for both reads and writes.
You can also observe the newly added node using the following command.
$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
yb-master-0    1/1       Running   0          5m
yb-master-1    1/1       Running   0          5m
yb-master-2    1/1       Running   0          5m
yb-tserver-0   1/1       Running   1          5m
yb-tserver-1   1/1       Running   1          5m
yb-tserver-2   1/1       Running   0          5m
yb-tserver-3   1/1       Running   0          4m
4. Scale back down to 3 nodes
The cluster can now be scaled back to only 3 nodes.
$ kubectl scale statefulset yb-tserver --replicas=3
$ kubectl get pods
NAME           READY     STATUS        RESTARTS   AGE
yb-master-0    1/1       Running       0          6m
yb-master-1    1/1       Running       0          6m
yb-master-2    1/1       Running       0          6m
yb-tserver-0   1/1       Running       1          6m
yb-tserver-1   1/1       Running       1          6m
yb-tserver-2   1/1       Running       0          6m
yb-tserver-3   1/1       Terminating   0          5m
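To follow the termination to completion, you can watch the pod list instead of polling it (plain kubectl; press Ctrl+C to stop once yb-tserver-3 is gone):
$ kubectl get pods -w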
5. Clean up (optional)
Optionally, you can shut down the local cluster created in Step 1.
$ kubectl delete -f yugabyte-statefulset.yaml