Delimited payload token filter

The delimited_payload token filter parses tokens that contain payloads during the analysis process. For example, the string red|1.5 fast|2.0 car|1.0 is parsed into the tokens red (with a payload of 1.5), fast (with a payload of 2.0), and car (with a payload of 1.0). This is particularly useful when your tokens include additional associated data, such as weights, scores, or other numeric values, that you can use for scoring or custom query logic. The filter attaches these payloads (extra metadata) to tokens and can handle integer, float, and string payload types.

When analyzing text, the delimited_payload token filter parses each token, extracts the payload, and attaches it to the token. This payload can later be used in queries to influence scoring, boosting, or other custom behaviors.

Payloads are stored as Base64-encoded strings. By default, payloads are not returned in the response along with the tokens. To return them, you must configure additional index settings. For more information, see Example with a stored payload.
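
For a sense of what these Base64 strings contain, the following minimal Python sketch decodes a float payload, assuming the 32-bit big-endian IEEE 754 layout used by the default float encoding. The decode_float_payload helper is hypothetical, written for this example, and is not part of any OpenSearch client:

```python
import base64
import struct

def decode_float_payload(payload):
    """Decode a Base64 payload, assuming 4 bytes of big-endian IEEE 754."""
    return struct.unpack(">f", base64.b64decode(payload))[0]

print(decode_float_payload("P8AAAA=="))  # 1.5
```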

Parameters

The delimited_payload token filter has two parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
encoding | Optional | String | Specifies the data type of the payload attached to the tokens. This determines how the payload data is interpreted during analysis and querying. Valid values are float, identity, and int, described below. Default is float.
delimiter | Optional | String | Specifies the character that separates the token from its payload in the input text. Default is the pipe character (\|).

The valid encoding values are as follows:

- float: The payload is interpreted as a 32-bit floating-point number using IEEE 754 format (for example, 2.5 in car|2.5).
- identity: The payload is interpreted as a sequence of characters (for example, in user|admin, admin is interpreted as a string).
- int: The payload is interpreted as a 32-bit integer (for example, 1 in priority|1).
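
To illustrate how the three encoding values serialize payload bytes before Base64 encoding, here is a small Python sketch. The encode_payload helper is hypothetical, written for this example; it assumes the 4-byte big-endian layout used for float and int payloads and raw character bytes for identity:

```python
import base64
import struct

def encode_payload(value, encoding="float"):
    """Serialize a payload as each encoding setting would (a sketch:
    4-byte big-endian for float/int, raw character bytes for identity)."""
    if encoding == "float":
        raw = struct.pack(">f", float(value))  # 32-bit IEEE 754
    elif encoding == "int":
        raw = struct.pack(">i", int(value))    # 32-bit integer
    else:                                      # identity
        raw = str(value).encode("utf-8")       # character bytes (assumed UTF-8)
    return base64.b64encode(raw).decode("ascii")

print(encode_payload(2.5, "float"))         # QCAAAA==
print(encode_payload(1, "int"))             # AAAAAQ==
print(encode_payload("admin", "identity"))  # YWRtaW4=
```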

Example without a stored payload

The following example request creates a new index named my_index and configures an analyzer with a delimited_payload filter:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_payload_filter": {
          "type": "delimited_payload",
          "delimiter": "|",
          "encoding": "float"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["my_payload_filter"]
        }
      }
    }
  }
}
```


Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "red|1.5 fast|2.0 car|1.0"
}
```


The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "red",
      "start_offset": 0,
      "end_offset": 7,
      "type": "word",
      "position": 0
    },
    {
      "token": "fast",
      "start_offset": 8,
      "end_offset": 16,
      "type": "word",
      "position": 1
    },
    {
      "token": "car",
      "start_offset": 17,
      "end_offset": 24,
      "type": "word",
      "position": 2
    }
  ]
}
```

Example with a stored payload

To return the payload in the response, create an index that stores term vectors by setting term_vector to with_positions_payloads or with_positions_offsets_payloads in the index mappings. For example, the following index is configured to store term vectors:

```json
PUT /visible_payloads
{
  "mappings": {
    "properties": {
      "text": {
        "type": "text",
        "term_vector": "with_positions_payloads",
        "analyzer": "custom_analyzer"
      }
    }
  },
  "settings": {
    "analysis": {
      "filter": {
        "my_payload_filter": {
          "type": "delimited_payload",
          "delimiter": "|",
          "encoding": "float"
        }
      },
      "analyzer": {
        "custom_analyzer": {
          "tokenizer": "whitespace",
          "filter": [ "my_payload_filter" ]
        }
      }
    }
  }
}
```


You can index a document into this index using the following request:

```json
PUT /visible_payloads/_doc/1
{
  "text": "red|1.5 fast|2.0 car|1.0"
}
```


Generated tokens

Use the following term vectors request to examine the tokens and their payloads for the indexed document:

```json
GET /visible_payloads/_termvectors/1
{
  "fields": ["text"]
}
```


The response contains the generated tokens, which include payloads:

```json
{
  "_index": "visible_payloads",
  "_id": "1",
  "_version": 1,
  "found": true,
  "took": 3,
  "term_vectors": {
    "text": {
      "field_statistics": {
        "sum_doc_freq": 3,
        "doc_count": 1,
        "sum_ttf": 3
      },
      "terms": {
        "car": {
          "term_freq": 1,
          "tokens": [
            {
              "position": 2,
              "start_offset": 17,
              "end_offset": 24,
              "payload": "P4AAAA=="
            }
          ]
        },
        "fast": {
          "term_freq": 1,
          "tokens": [
            {
              "position": 1,
              "start_offset": 8,
              "end_offset": 16,
              "payload": "QAAAAA=="
            }
          ]
        },
        "red": {
          "term_freq": 1,
          "tokens": [
            {
              "position": 0,
              "start_offset": 0,
              "end_offset": 7,
              "payload": "P8AAAA=="
            }
          ]
        }
      }
    }
  }
}
```
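
As a quick check, you can decode these payloads back into the original values. The following Python sketch, assuming the same 32-bit big-endian IEEE 754 layout as above, decodes the three payload strings copied from the response:

```python
import base64
import struct

# Payload strings copied from the term vectors response above
payloads = {"red": "P8AAAA==", "fast": "QAAAAA==", "car": "P4AAAA=="}

for term, encoded in payloads.items():
    value = struct.unpack(">f", base64.b64decode(encoded))[0]
    print(term, value)  # red 1.5, fast 2.0, car 1.0
```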
