Concurrency Options

This is a walkthrough of the concurrency options available to control the number of concurrent workers that ghz uses to make requests to the server. All examples use a simple unary gRPC call.

Many of these options mirror the load control options, but they control the number of concurrent workers independently of the request rate.
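
For reference, the load control walkthrough drives the request rate with an analogous set of flags. A step load schedule that ramps RPS rather than workers might look like the sketch below; the --load-* flag names are assumed from that walkthrough, so check ghz --help on your version before relying on them:

  ./dist/ghz --insecure --async --proto /protos/helloworld.proto \
    --call helloworld.Greeter/SayHello \
    -n 10000 -c 50 \
    --load-schedule=step --load-start=50 --load-step=10 --load-end=150 --load-step-duration=5s \
    -d '{"name":"{{.WorkerID}}"}' 0.0.0.0:50051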

Step Up Concurrency

  ./dist/ghz --insecure --async --proto /protos/helloworld.proto \
    --call helloworld.Greeter/SayHello \
    -n 10000 --rps 200 \
    --concurrency-schedule=step --concurrency-start=5 --concurrency-step=5 --concurrency-end=50 --concurrency-step-duration=5s \
    -d '{"name":"{{.WorkerID}}"}' 0.0.0.0:50051

  Summary:
    Count:        10000
    Total:        50.05 s
    Slowest:      52.04 ms
    Fastest:      50.19 ms
    Average:      50.59 ms
    Requests/sec: 199.79

  Response time histogram:
    50.187 [1]    |
    50.373 [1786] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    50.558 [3032] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    50.743 [2822] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    50.929 [1536] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    51.114 [562]  |∎∎∎∎∎∎∎
    51.299 [194]  |∎∎∎
    51.485 [42]   |∎
    51.670 [15]   |
    51.855 [6]    |
    52.041 [4]    |

  Latency distribution:
    10 % in 50.33 ms
    25 % in 50.42 ms
    50 % in 50.57 ms
    75 % in 50.73 ms
    90 % in 50.89 ms
    95 % in 51.01 ms
    99 % in 51.24 ms

  Status code distribution:
    [OK] 10000 responses

This test performs a constant load at 200 RPS, starting with 5 workers and increasing concurrency by 5 workers every 5s until 50 workers are running. At that point all 50 workers are used to sustain the constant 200 RPS until the total limit of 10000 requests is reached. Worker count over time would look something like this:

Step Up Concurrency Constant Load
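
The worker ramp described above can be reasoned about directly. The sketch below is a minimal illustration of the step schedule's assumed semantics (derived from the flags used in this example, not ghz's actual implementation): start at --concurrency-start, change by --concurrency-step every --concurrency-step-duration, and stop changing once --concurrency-end is reached.

  // step_sketch.go: a minimal sketch of a step concurrency schedule's
  // worker count over time. Assumed semantics only; not ghz source code.
  package main

  import (
      "fmt"
      "time"
  )

  // stepWorkers returns the worker count after elapsed time t for a step
  // schedule that starts at `start`, changes by `step` every `stepDur`,
  // and stops changing once `end` is reached.
  func stepWorkers(start, step, end int, stepDur, t time.Duration) int {
      w := start + step*int(t/stepDur)
      if (step > 0 && w > end) || (step < 0 && w < end) {
          return end // clamp at the configured end value
      }
      return w
  }

  func main() {
      // Step-up example above: 5 workers, +5 every 5s, capped at 50.
      for s := 0; s <= 60; s += 5 {
          t := time.Duration(s) * time.Second
          fmt.Printf("t=%2ds workers=%d\n", s, stepWorkers(5, 5, 50, 5*time.Second, t))
      }
  }

Run against the step-up flags above, the count climbs 5, 10, 15, ... and holds at 50 from roughly the 45s mark onward.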

Step Down Concurrency

  ./dist/ghz --insecure --async --proto /protos/helloworld.proto \
    --call helloworld.Greeter/SayHello \
    -n 10000 --rps 200 \
    --concurrency-schedule=step --concurrency-start=50 --concurrency-step=-5 \
    --concurrency-step-duration=5s --concurrency-max-duration=30s \
    -d '{"name":"{{.WorkerID}}"}' 0.0.0.0:50051

  Summary:
    Count:        10000
    Total:        50.05 s
    Slowest:      52.13 ms
    Fastest:      50.15 ms
    Average:      50.63 ms
    Requests/sec: 199.79

  Response time histogram:
    50.152 [1]    |
    50.350 [1145] |∎∎∎∎∎∎∎∎∎∎∎∎∎
    50.548 [2476] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    50.746 [3491] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    50.943 [2202] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    51.141 [490]  |∎∎∎∎∎∎
    51.339 [148]  |∎∎
    51.536 [30]   |
    51.734 [10]   |
    51.932 [4]    |
    52.130 [3]    |

  Latency distribution:
    10 % in 50.34 ms
    25 % in 50.47 ms
    50 % in 50.63 ms
    75 % in 50.77 ms
    90 % in 50.89 ms
    95 % in 50.99 ms
    99 % in 51.24 ms

  Status code distribution:
    [OK] 10000 responses

This test performs a constant load at 200 RPS, starting with 50 workers and decreasing concurrency by 5 workers every 5s until 30s has elapsed. At that point all remaining workers are used to sustain the constant 200 RPS until the total limit of 10000 requests is reached. Worker count over time would look something like this:

Step Down Concurrency Constant Load
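
As a variant, the same ramp-down could be bounded by a final worker count rather than by a duration. The command below is a sketch only: it assumes that --concurrency-end (used in the step-up example) also serves as the lower bound when the step is negative, so verify the behavior on your ghz version.

  ./dist/ghz --insecure --async --proto /protos/helloworld.proto \
    --call helloworld.Greeter/SayHello \
    -n 10000 --rps 200 \
    --concurrency-schedule=step --concurrency-start=50 --concurrency-step=-5 \
    --concurrency-step-duration=5s --concurrency-end=20 \
    -d '{"name":"{{.WorkerID}}"}' 0.0.0.0:50051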

Linear Increase of Concurrency

  ./dist/ghz --insecure --async --proto /protos/helloworld.proto \
    --call helloworld.Greeter/SayHello \
    -n 10000 --rps 200 \
    --concurrency-schedule=line --concurrency-start=20 --concurrency-step=2 --concurrency-max-duration=30s \
    -d '{"name":"{{.WorkerID}}"}' 0.0.0.0:50051

  Summary:
    Count:        10000
    Total:        50.05 s
    Slowest:      58.54 ms
    Fastest:      50.16 ms
    Average:      50.60 ms
    Requests/sec: 199.79

  Response time histogram:
    50.157 [1]    |
    50.995 [9515] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    51.834 [477]  |∎∎
    52.672 [3]    |
    53.510 [0]    |
    54.349 [0]    |
    55.187 [1]    |
    56.025 [0]    |
    56.864 [0]    |
    57.702 [1]    |
    58.540 [2]    |

  Latency distribution:
    10 % in 50.31 ms
    25 % in 50.40 ms
    50 % in 50.60 ms
    75 % in 50.75 ms
    90 % in 50.89 ms
    95 % in 50.99 ms
    99 % in 51.25 ms

  Status code distribution:
    [OK] 10000 responses

This test performs a constant load at 200 RPS, starting with 20 workers and increasing concurrency linearly by 2 workers every 1s until 30s has elapsed, at which point roughly 20 + 2 × 30 = 80 workers are running. Those workers are then used to sustain the constant 200 RPS until the total limit of 10000 requests is reached. Worker count over time would look something like this:

Line Up Concurrency Constant Load
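
Analogously to the step sketch earlier, the snippet below illustrates the assumed semantics of the line schedule used here: add --concurrency-step workers every second until --concurrency-max-duration elapses, then hold the count steady. It is a sketch of the schedule, not ghz's implementation.

  // line_sketch.go: a minimal sketch of a line (linear) concurrency
  // schedule's worker count over time. Assumed semantics only.
  package main

  import (
      "fmt"
      "time"
  )

  // lineWorkers returns the worker count after elapsed time t for a linear
  // schedule that starts at `start` and adds `stepPerSec` workers each
  // second until `maxDur` has elapsed.
  func lineWorkers(start, stepPerSec int, maxDur, t time.Duration) int {
      if t > maxDur {
          t = maxDur // after the max duration the worker count stops changing
      }
      return start + stepPerSec*int(t/time.Second)
  }

  func main() {
      // Linear example above: 20 workers, +2 per second, ramping for 30s.
      for _, s := range []int{0, 10, 20, 30, 40, 50} {
          t := time.Duration(s) * time.Second
          fmt.Printf("t=%2ds workers=%d\n", s, lineWorkers(20, 2, 30*time.Second, t))
      }
  }

Under these assumptions the count reaches 20 + 2 × 30 = 80 workers at the 30s mark and stays there for the remainder of the run.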