elasticsearch-logger

Description

The elasticsearch-logger Plugin is used to forward logs to Elasticsearch for analysis and storage.

When the Plugin is enabled, APISIX serializes the request context information into the Elasticsearch Bulk format and submits it to the batch queue. When the maximum batch size is reached, the data in the queue is pushed to Elasticsearch. See batch processor for more details.
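
For illustration, each batched entry is written as a newline-delimited pair of a Bulk action line and a document line carrying the serialized log. The sketch below assumes field.index is set to services and field.type to collector, as in the full configuration example later in this document; the real document line contains the full log entry shown under "Example of default log format":

  {"index":{"_index":"services","_type":"collector"}}
  {"client_ip":"127.0.0.1","route_id":"1","upstream":"127.0.0.1:1980","start_time":1704524807607,"response":{"status":200,"size":118}}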

Attributes

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| endpoint_addr | string | Deprecated |  | Deprecated. Use endpoint_addrs instead. Elasticsearch API endpoint address. |
| endpoint_addrs | array | True |  | Elasticsearch API endpoint addresses. If multiple endpoints are configured, one is chosen at random for each write. |
| field | array | True |  | Elasticsearch field configuration. |
| field.index | string | True |  | Elasticsearch _index field. |
| field.type | string | False | Elasticsearch default value | Elasticsearch _type field. |
| log_format | object | False |  | Log format declared as key-value pairs in JSON format. Values only support strings. APISIX or Nginx variables can be used by prefixing the string with $. |
| auth | array | False |  | Elasticsearch authentication configuration. |
| auth.username | string | True |  | Elasticsearch authentication username. |
| auth.password | string | True |  | Elasticsearch authentication password. |
| ssl_verify | boolean | False | true | When set to true, enables SSL verification as per the OpenResty docs. |
| timeout | integer | False | 10 | Timeout in seconds for sending data to Elasticsearch. |
| include_req_body | boolean | False | false | When set to true, includes the request body in the log. If the request body is too big to be kept in memory, it cannot be logged due to Nginx's limitations. |
| include_req_body_expr | array | False |  | Filter for when the include_req_body attribute is set to true. The request body is only logged when the expression set here evaluates to true. See lua-resty-expr for more. |
| include_resp_body | boolean | False | false | When set to true, includes the response body in the log. |
| include_resp_body_expr | array | False |  | Filter for when the include_resp_body attribute is set to true. The response body is only logged when the expression set here evaluates to true. See lua-resty-expr for more. |

NOTE: encrypt_fields = {"auth.password"} is also defined in the schema, which means that the field will be stored encrypted in etcd. See encrypted storage fields.
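
As a quick sanity check, assuming the default etcd key layout under the /apisix prefix and that data encryption is enabled, you can read the stored Route straight from etcd and confirm that auth.password does not appear in plain text (the exact key path and ciphertext depend on your deployment):

  etcdctl get /apisix/routes/1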

This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need to submit data frequently. The batch processor submits data every 5 seconds or when the data in the queue reaches 1000 entries. See Batch Processor for more information or to set your custom configuration.
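
If those defaults do not suit your traffic, the batch processor attributes can be tuned directly in the plugin configuration alongside the Elasticsearch settings. The partial sketch below repeats only the batch-related keys; the batch values shown match those used in the full configuration example further down:

  "elasticsearch-logger": {
    "endpoint_addrs": ["http://127.0.0.1:9200"],
    "field": { "index": "services" },
    "batch_max_size": 1000,
    "inactive_timeout": 5,
    "buffer_duration": 60,
    "max_retry_count": 0,
    "retry_delay": 1
  }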

Example of default log format

  {
    "upstream_latency": 2,
    "apisix_latency": 100.9999256134,
    "request": {
      "size": 59,
      "url": "http://localhost:1984/hello",
      "method": "GET",
      "querystring": {},
      "headers": {
        "host": "localhost",
        "connection": "close"
      },
      "uri": "/hello"
    },
    "server": {
      "version": "3.7.0",
      "hostname": "localhost"
    },
    "client_ip": "127.0.0.1",
    "upstream": "127.0.0.1:1980",
    "response": {
      "status": 200,
      "headers": {
        "content-length": "12",
        "connection": "close",
        "content-type": "text/plain",
        "server": "APISIX/3.7.0"
      },
      "size": 118
    },
    "start_time": 1704524807607,
    "route_id": "1",
    "service_id": "",
    "latency": 102.9999256134
  }

Enable Plugin

Full configuration

The example below shows a complete configuration of the Plugin on a specific Route:

NOTE:

You can fetch the admin_key from config.yaml and save it to an environment variable with the following command:

  admin_key=$(yq '.deployment.admin.admin_key[0].key' conf/config.yaml | sed 's/"//g')
  curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H "X-API-KEY: $admin_key" -X PUT -d '
  {
    "plugins": {
      "elasticsearch-logger": {
        "endpoint_addr": "http://127.0.0.1:9200",
        "field": {
          "index": "services",
          "type": "collector"
        },
        "auth": {
          "username": "elastic",
          "password": "123456"
        },
        "ssl_verify": false,
        "timeout": 60,
        "retry_delay": 1,
        "buffer_duration": 60,
        "max_retry_count": 0,
        "batch_max_size": 1000,
        "inactive_timeout": 5,
        "name": "elasticsearch-logger"
      }
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "127.0.0.1:1980": 1
      }
    },
    "uri": "/elasticsearch.do"
  }'

Minimal configuration example

The example below shows a bare minimum configuration of the Plugin on a Route:

  curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H "X-API-KEY: $admin_key" -X PUT -d '
  {
    "plugins": {
      "elasticsearch-logger": {
        "endpoint_addr": "http://127.0.0.1:9200",
        "field": {
          "index": "services"
        }
      }
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "127.0.0.1:1980": 1
      }
    },
    "uri": "/elasticsearch.do"
  }'

Example usage

Once you have configured the Route to use the Plugin, when you make a request to APISIX, it will be logged in your Elasticsearch server:

  curl -i http://127.0.0.1:9080/elasticsearch.do\?q\=hello
  HTTP/1.1 200 OK
  ...
  hello, world

You should be able to get the log from Elasticsearch:

  curl -X GET "http://127.0.0.1:9200/services/_search" | jq .
  {
    "took": 0,
    ...
      "hits": [
        {
          "_index": "services",
          "_type": "_doc",
          "_id": "M1qAxYIBRmRqWkmH4Wya",
          "_score": 1,
          "_source": {
            "apisix_latency": 0,
            "route_id": "1",
            "server": {
              "version": "2.15.0",
              "hostname": "apisix"
            },
            "request": {
              "size": 102,
              "uri": "/elasticsearch.do?q=hello",
              "querystring": {
                "q": "hello"
              },
              "headers": {
                "user-agent": "curl/7.29.0",
                "host": "127.0.0.1:9080",
                "accept": "*/*"
              },
              "url": "http://127.0.0.1:9080/elasticsearch.do?q=hello",
              "method": "GET"
            },
            "service_id": "",
            "latency": 0,
            "upstream": "127.0.0.1:1980",
            "upstream_latency": 1,
            "client_ip": "127.0.0.1",
            "start_time": 1661170929107,
            "response": {
              "size": 192,
              "headers": {
                "date": "Mon, 22 Aug 2022 12:22:09 GMT",
                "server": "APISIX/2.15.0",
                "content-type": "text/plain; charset=utf-8",
                "connection": "close",
                "transfer-encoding": "chunked"
              },
              "status": 200
            }
          }
        }
      ]
    }
  }

Metadata

You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| log_format | object | False |  | Log format declared as key-value pairs in JSON format. Values only support strings. APISIX or Nginx variables can be used by prefixing the string with $. |
IMPORTANT:

Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the elasticsearch-logger Plugin.

The example below shows how you can configure it through the Admin API:

  curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger \
  -H "X-API-KEY: $admin_key" -X PUT -d '
  {
    "log_format": {
      "host": "$host",
      "@timestamp": "$time_iso8601",
      "client_ip": "$remote_addr"
    }
  }'

With this configuration, your logs would be formatted as shown below:

  1. {"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
  2. {"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}

Make a request to APISIX again:

  curl -i http://127.0.0.1:9080/elasticsearch.do\?q\=hello
  HTTP/1.1 200 OK
  ...
  hello, world

You should be able to get this log from Elasticsearch:

  curl -X GET "http://127.0.0.1:9200/services/_search" | jq .
  {
    "took": 0,
    ...
    "hits": {
      "total": {
        "value": 1,
        "relation": "eq"
      },
      "max_score": 1,
      "hits": [
        {
          "_index": "services",
          "_type": "_doc",
          "_id": "NVqExYIBRmRqWkmH4WwG",
          "_score": 1,
          "_source": {
            "@timestamp": "2022-08-22T20:26:31+08:00",
            "client_ip": "127.0.0.1",
            "host": "127.0.0.1",
            "route_id": "1"
          }
        }
      ]
    }
  }

Disable Metadata

  curl http://127.0.0.1:9180/apisix/admin/plugin_metadata/elasticsearch-logger \
  -H "X-API-KEY: $admin_key" -X DELETE

Delete Plugin

To remove the elasticsearch-logger Plugin, you can delete the corresponding JSON configuration from the Plugin configuration. APISIX will automatically reload and you do not have to restart for this to take effect.

  curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H "X-API-KEY: $admin_key" -X PUT -d '
  {
    "plugins": {},
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "127.0.0.1:1980": 1
      }
    },
    "uri": "/elasticsearch.do"
  }'