
Configuring a gRPC Service

Note: This guide assumes familiarity with gRPC. To learn how to set up Kong with an upstream REST API, see the Configuring a Service guide.

gRPC proxying is natively supported in Kong. In this guide, you’ll learn how to configure Kong to manage your gRPC services. The examples use grpcurl as the gRPC client and grpcbin as the upstream gRPC service.
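If you need a local gRPC server to test against, one option is to run grpcbin in Docker. The command below is only a sketch: it assumes Docker is available and that the moul/grpcbin image serves plaintext gRPC on container port 9000, mapped here to port 15002 to match the rest of this guide.

    docker run -d --name grpcbin -p 15002:9000 moul/grpcbin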

The guide sets up two examples:

  • A single gRPC service with a single catch-all route that proxies all matching gRPC traffic to an upstream gRPC service.
  • A single gRPC service with multiple routes, demonstrating how to use a route per gRPC method.

Kong’s gRPC support assumes gRPC over HTTP/2 framing, so make sure Kong has at least one HTTP/2 proxy listener configured.

The following examples assume that Kong is running and listening for HTTP/2 proxy requests on port 9080.
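For example, an HTTP/2 proxy listener on port 9080 can be declared in kong.conf with the proxy_listen property (a sketch; merge it with any listeners you already have, then restart or reload Kong):

    proxy_listen = 0.0.0.0:9080 http2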

Single gRPC service and route

  1. Issue the following request to create a gRPC service. For example, if your gRPC server is listening on localhost, port 15002:

    curl -XPOST localhost:8001/services \
    --data name=grpc \
    --data protocol=grpc \
    --data host=localhost \
    --data port=15002
  2. Issue the following request to create a gRPC route:

    curl -XPOST localhost:8001/services/grpc/routes \
    --data protocols=grpc \
    --data name=catch-all \
    --data paths=/
  3. Using the grpcurl command line client, issue the following gRPC request:

    grpcurl -v -d '{"greeting": "Kong!"}' \
    -plaintext localhost:9080 hello.HelloService.SayHello

    The response should resemble the following:

    Resolved method descriptor:
    rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse );
    Request metadata to send:
    (empty)
    Response headers received:
    content-type: application/grpc
    date: Tue, 16 Jul 2019 21:37:36 GMT
    server: openresty/1.15.8.1
    via: kong/1.2.1
    x-kong-proxy-latency: 0
    x-kong-upstream-latency: 0
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response trailers received:
    (empty)
    Sent 1 request and received 1 response

Notice that Kong inserted its own response headers, such as via and x-kong-proxy-latency, into the response.
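If you want to confirm what was created, you can read the entities back from the Admin API using the names chosen above, for example:

    curl localhost:8001/services/grpc
    curl localhost:8001/routes/catch-all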

Single gRPC service with multiple routes

Building on top of the previous example, let’s create a few more routes for individual gRPC methods.

In this example, the gRPC HelloService service exposes a few different methods, as can be seen in its protocol buffer file.
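If you would rather not open the .proto file, you can also ask the server itself, assuming it has gRPC reflection enabled (grpcbin does). Through the catch-all route created earlier, for example:

    grpcurl -plaintext localhost:9080 describe hello.HelloService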

  1. Create individual routes for its SayHello and LotsOfReplies methods.

    1. Create a route for SayHello:

      curl -X POST localhost:8001/services/grpc/routes \
      --data protocols=grpc \
      --data paths=/hello.HelloService/SayHello \
      --data name=say-hello
    2. Create a route for LotsOfReplies:

      curl -X POST localhost:8001/services/grpc/routes \
      --data protocols=grpc \
      --data paths=/hello.HelloService/LotsOfReplies \
      --data name=lots-of-replies

    With this setup, gRPC requests to the SayHello method match the say-hello route, while requests to LotsOfReplies match the lots-of-replies route.

  2. In kong.conf, set allow_debug_header: on so that Kong honors the kong-debug header used in the following steps, then reload or restart Kong. (A configuration sketch follows these steps.)

  3. Issue a gRPC request to the SayHello method:

    grpcurl -v -d '{"greeting": "Kong!"}' \
    -H 'kong-debug: 1' -plaintext \
    localhost:9080 hello.HelloService.SayHello

    Notice that the example sends the header kong-debug, which causes Kong to insert debugging information in response headers.

    The response should look like:

    Resolved method descriptor:
    rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse );
    Request metadata to send:
    kong-debug: 1
    Response headers received:
    content-type: application/grpc
    date: Tue, 16 Jul 2019 21:57:00 GMT
    kong-route-id: 390ef3d1-d092-4401-99ca-0b4e42453d97
    kong-service-id: d82736b7-a4fd-4530-b575-c68d94c3493a
    kong-service-name: grpc
    server: openresty/1.15.8.1
    via: kong/1.2.1
    x-kong-proxy-latency: 0
    x-kong-upstream-latency: 0
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response trailers received:
    (empty)
    Sent 1 request and received 1 response

    Notice that the kong-route-id value refers to the say-hello route created in the previous step.

  4. Similarly, let’s issue a request to the LotsOfReplies gRPC method:

    grpcurl -v -d '{"greeting": "Kong!"}' \
    -H 'kong-debug: 1' -plaintext \
    localhost:9080 hello.HelloService.LotsOfReplies

    The response should look like the following:

    Resolved method descriptor:
    rpc LotsOfReplies ( .hello.HelloRequest ) returns ( stream .hello.HelloResponse );
    Request metadata to send:
    kong-debug: 1
    Response headers received:
    content-type: application/grpc
    date: Tue, 30 Jul 2019 22:21:40 GMT
    kong-route-id: 133659bb-7e88-4ac5-b177-bc04b3974c87
    kong-service-id: 31a87674-f984-4f75-8abc-85da478e204f
    kong-service-name: grpc
    server: openresty/1.15.8.1
    via: kong/1.2.1
    x-kong-proxy-latency: 14
    x-kong-upstream-latency: 0
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response contents:
    {
      "reply": "hello Kong!"
    }
    Response trailers received:
    (empty)
    Sent 1 request and received 10 responses

    Notice that the kong-route-id response header now carries a different value, referring to the lots-of-replies route created above.
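For reference, the setting from step 2 is a single line in kong.conf; this is a minimal sketch showing only that property, and it can also be supplied through the KONG_ALLOW_DEBUG_HEADER environment variable. Reload or restart Kong after changing it.

    allow_debug_header = on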

Note: Some gRPC clients (typically CLI clients) issue “gRPC Reflection Requests” as a means of determining what methods a server exports and how those methods are called. These requests have a particular path. For example, /grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo is a valid reflection path. As with any proxy request, Kong needs to know how to route these requests. In the current example, they would be routed to the catch-all route whose path is /, matching any path. If no route matches the gRPC reflection request, Kong will respond, as expected, with a 404 Not Found response.
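Because reflection requests match the catch-all route in this setup, reflection-based tooling keeps working through Kong. For example, you can list the services exposed by the upstream server (again assuming it has reflection enabled):

    grpcurl -plaintext localhost:9080 list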

Enabling plugins

Let’s try out the File Log plugin with gRPC.

  1. Issue the following request to enable the File Log plugin on the SayHello route:

    curl -X POST localhost:8001/routes/say-hello/plugins \
    --data name=file-log \
    --data config.path=grpc-say-hello.log
  2. Follow the output of the log as gRPC requests are made to SayHello:

    tail -f grpc-say-hello.log
    {"latencies":{"request":8,"kong":5,"proxy":3},"service":{"host":"localhost","created_at":1564527408,"connect_timeout":60000,"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","protocol":"grpc","name":"grpc","read_timeout":60000,"port":15002,"updated_at":1564527408,"write_timeout":60000,"retries":5},"request":{"querystring":{},"size":"46","uri":"\/hello.HelloService\/SayHello","url":"http:\/\/localhost:9080\/hello.HelloService\/SayHello","headers":{"host":"localhost:9080","content-type":"application\/grpc","kong-debug":"1","user-agent":"grpc-go\/1.20.0-dev","te":"trailers"},"method":"POST"},"client_ip":"127.0.0.1","tries":[{"balancer_latency":0,"port":15002,"balancer_start":1564527732522,"ip":"127.0.0.1"}],"response":{"headers":{"kong-route-id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","content-type":"application\/grpc","connection":"close","kong-service-name":"grpc","kong-service-id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","kong-route-name":"say-hello","via":"kong\/1.2.1","x-kong-proxy-latency":"5","x-kong-upstream-latency":"3"},"status":200,"size":"298"},"route":{"id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","updated_at":1564527431,"protocols":["grpc"],"created_at":1564527431,"service":{"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c"},"name":"say-hello","preserve_host":false,"regex_priority":0,"strip_path":false,"paths":["\/hello.HelloService\/SayHello"],"https_redirect_status_code":426},"started_at":1564527732516}
    {"latencies":{"request":3,"kong":1,"proxy":1},"service":{"host":"localhost","created_at":1564527408,"connect_timeout":60000,"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","protocol":"grpc","name":"grpc","read_timeout":60000,"port":15002,"updated_at":1564527408,"write_timeout":60000,"retries":5},"request":{"querystring":{},"size":"46","uri":"\/hello.HelloService\/SayHello","url":"http:\/\/localhost:9080\/hello.HelloService\/SayHello","headers":{"host":"localhost:9080","content-type":"application\/grpc","kong-debug":"1","user-agent":"grpc-go\/1.20.0-dev","te":"trailers"},"method":"POST"},"client_ip":"127.0.0.1","tries":[{"balancer_latency":0,"port":15002,"balancer_start":1564527733555,"ip":"127.0.0.1"}],"response":{"headers":{"kong-route-id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","content-type":"application\/grpc","connection":"close","kong-service-name":"grpc","kong-service-id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c","kong-route-name":"say-hello","via":"kong\/1.2.1","x-kong-proxy-latency":"1","x-kong-upstream-latency":"1"},"status":200,"size":"298"},"route":{"id":"e49f2df9-3e8e-4bdb-8ce6-2c505eac4ab6","updated_at":1564527431,"protocols":["grpc"],"created_at":1564527431,"service":{"id":"74a95d95-fbe4-4ddb-a448-b8faf07ece4c"},"name":"say-hello","preserve_host":false,"regex_priority":0,"strip_path":false,"paths":["\/hello.HelloService\/SayHello"],"https_redirect_status_code":426},"started_at":1564527733554}