This page provides some performance benchmarks that give an idea of the overhead of using the OPA-Envoy plugin.

Test Setup

The setup uses the same example Go application that’s described in the standalone Envoy tutorial. Below are some more details about the setup:

  • Platform: Minikube
  • Kubernetes Version: 1.18.6
  • Envoy Version: 1.17.0
  • OPA-Envoy Version: 0.26.0-envoy

Benchmarks

The benchmark result below provides the percentile distribution of the latency observed by sending 100 requests/sec to the sample application. Each request makes a GET call to the /people endpoint exposed by the application.
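
The load for a run like this can be generated with any constant-rate load generator; the exact tool behind these numbers is not specified here. As a purely illustrative sketch using vegeta, with a placeholder application address and an assumed 60-second run:

# illustrative only: <app-address> is a placeholder and 60s is an assumed duration
echo "GET http://<app-address>/people" | vegeta attack -rate=100 -duration=60s | vegeta report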

The graph shows the latency distribution when the load test is performed under the following conditions:

  • App Only

In this case, the graph documents the latency distribution observed when requests are sent directly to the application, i.e. neither Envoy nor OPA is in the request path. This scenario is depicted by the blue curve.

  • App and Envoy

In this case, the latency distribution is measured with the Envoy External Authorization API disabled. This means Envoy is in the request path but OPA is not. This scenario is depicted by the red curve.

  • App, Envoy and OPA (NOP policy)

In this case, we see the latency observed with the Envoy External Authorization API enabled, which means Envoy makes a call to OPA on every incoming request. The graph explores the effect of loading the NOP policy below into OPA. This scenario is depicted by the green curve.

package envoy.authz

default allow = true

  • App, Envoy and OPA (RBAC policy)

In this case, we see the latency observed with the Envoy External Authorization API enabled and explore the effect of loading the following RBAC policy into OPA. This scenario is depicted by the yellow curve.

package envoy.authz

import input.attributes.request.http as http_request

default allow = false

allow {
    roles_for_user[r]
    required_roles[r]
}

roles_for_user[r] {
    r := user_roles[user_name][_]
}

required_roles[r] {
    perm := role_perms[r][_]
    perm.method = http_request.method
    perm.path = http_request.path
}

user_name = parsed {
    [_, encoded] := split(http_request.headers.authorization, " ")
    [parsed, _] := split(base64url.decode(encoded), ":")
}

user_roles = {
    "alice": ["guest"],
    "bob": ["admin"]
}

role_perms = {
    "guest": [
        {"method": "GET", "path": "/people"},
    ],
    "admin": [
        {"method": "GET", "path": "/people"},
        {"method": "POST", "path": "/people"},
    ],
}
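
To make the policy above more concrete, here is a simplified, hypothetical sketch of the kind of input document the OPA-Envoy plugin derives from Envoy's External Authorization check request; the real input carries many more attributes, and the credentials shown (alice:password, base64-encoded) are made up for illustration:

{
  "attributes": {
    "request": {
      "http": {
        "method": "GET",
        "path": "/people",
        "headers": {
          "authorization": "Basic YWxpY2U6cGFzc3dvcmQ="
        }
      }
    }
  }
}

With this input, user_name evaluates to "alice", roles_for_user contains "guest", and required_roles contains "guest" (guests may GET /people), so allow evaluates to true.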

[Figure 1: Latency distribution at 100 requests/sec]

The above four scenarios are repeated to measure the latency distribution, this time sending 1000 requests/sec to the sample application. The following graph captures the result.

[Figure 2: Latency distribution at 1000 requests/sec]

OPA Benchmarks

The tables below capture the gRPC Server Handler and OPA Evaluation times with the Envoy External Authorization API enabled and the RBAC policy described above loaded into OPA. All values are in microseconds.

OPA Evaluation

OPA Evaluation is the time taken to evaluate the policy.

| Number of Requests per sec | 75% | 90% | 95% | 99% | 99.9% | 99.99% | Mean | Median |
|---|---|---|---|---|---|---|---|---|
| 100 | 419.568 | 686.746 | 962.673 | 4048.899 | 14549.446 | 14680.476 | 467.001 | 311.939 |
| 1000 | 272.289 | 441.121 | 765.384 | 2766.152 | 63938.739 | 65609.013 | 380.009 | 207.277 |
| 2000 | 278.970 | 720.716 | 1830.884 | 4104.182 | 35013.074 | 35686.142 | 450.875 | 178.829 |
| 3000 | 266.105 | 693.839 | 1824.983 | 5069.019 | 368469.802 | 375877.246 | 971.173 | 175.948 |
| 4000 | 373.699 | 1087.224 | 2279.981 | 4735.961 | 95769.559 | 96310.587 | 665.828 | 218.180 |
| 5000 | 303.871 | 1188.718 | 2321.216 | 6116.459 | 317098.375 | 325740.476 | 865.961 | 188.054 |

gRPC Server Handler

gRPC Server Handler is the total time taken to prepare the input for the policy, evaluate the policy (OPA Evaluation) and prepare the result.

| Number of Requests per sec | 75% | 90% | 95% | 99% | 99.9% | 99.99% | Mean | Median |
|---|---|---|---|---|---|---|---|---|
| 100 | 825.112 | 1170.699 | 1882.797 | 6559.087 | 15583.934 | 15651.395 | 862.647 | 613.916 |
| 1000 | 536.859 | 957.586 | 1928.785 | 4606.781 | 139058.276 | 141515.222 | 884.912 | 397.676 |
| 2000 | 564.386 | 1784.671 | 2794.505 | 43412.251 | 271882.085 | 272075.761 | 2008.655 | 351.330 |
| 3000 | 538.376 | 2292.657 | 3014.675 | 32718.355 | 364730.469 | 370538.309 | 1799.534 | 322.755 |
| 4000 | 708.905 | 2397.769 | 4134.862 | 316881.804 | 636688.855 | 637773.152 | 7054.173 | 400.242 |
| 5000 | 620.252 | 2197.613 | 3548.392 | 176699.779 | 556518.400 | 558795.978 | 4581.492 | 339.063 |

Resource Utilization

The following table records the CPU and memory usage for the OPA-Envoy container. These metrics were obtained using the kubectl top command. No resource limits were specified for the OPA-Envoy container.

| Number of Requests per sec | CPU (cores) | Memory (bytes) |
|---|---|---|
| 100 | 253m | 21Mi |
| 1000 | 563m | 52Mi |
| 2000 | 906m | 121Mi |
| 3000 | 779m | 117Mi |
| 4000 | 920m | 159Mi |
| 5000 | 828m | 116Mi |
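
For reference, per-container figures like these can be collected with a command along the following lines, where the pod name is a placeholder:

# <opa-envoy-pod> is a placeholder; --containers reports per-container CPU(cores) and MEMORY(bytes)
kubectl top pod <opa-envoy-pod> --containers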

In the analysis so far, the gRPC client used in Envoy's External Authorization filter configuration is the Google C++ gRPC client. The following graph displays the latency distribution for the same four conditions described previously (i.e. App Only; App and Envoy; App, Envoy and OPA (NOP policy); and App, Envoy and OPA (RBAC policy)) by sending 100 requests/sec to the sample application, but now using Envoy's in-built gRPC client.
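
For context, the choice between the two clients is made via the grpc_service field of Envoy's External Authorization filter. Below is a minimal sketch of the two variants, of which only one would appear in a given configuration; the cluster name, target address, and port are placeholders rather than the exact values used for these benchmarks:

# Envoy's in-built gRPC client
grpc_service:
  envoy_grpc:
    cluster_name: opa-envoy          # placeholder cluster pointing at the OPA-Envoy plugin

# Google C++ gRPC client
grpc_service:
  google_grpc:
    target_uri: 127.0.0.1:9191       # assumed OPA-Envoy gRPC listener address
    stat_prefix: ext_authz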

[Figure 3: Latency distribution at 100 requests/sec with Envoy's in-built gRPC client]

The graph below captures the latency distribution when 1000 requests/sec are sent to the sample application and Envoy's in-built gRPC client is used.

[Figure 4: Latency distribution at 1000 requests/sec with Envoy's in-built gRPC client]

The above graphs show that extra latency is added when the OPA-Envoy plugin is used as an external authorization service. For example, in the previous graph, the latency for the App, Envoy and OPA (NOP policy) condition between the 90th and 99th percentile is at least double that for App and Envoy.

The following graphs show the latency distribution for the App, Envoy and OPA (NOP policy) and App, Envoy and OPA (RBAC policy) conditions, plotting the latencies seen with the Google C++ gRPC client and with Envoy's in-built gRPC client in the External Authorization filter configuration. The first graph shows the results when 100 requests/sec are sent to the application, and the second when 1000 requests/sec are sent.

[Figure 5: Google C++ gRPC client vs. Envoy's in-built gRPC client at 100 requests/sec]

[Figure 6: Google C++ gRPC client vs. Envoy's in-built gRPC client at 1000 requests/sec]