Predicate token filter

The predicate_token_filter evaluates whether tokens should be kept or discarded, depending on the conditions defined in a custom script. The tokens are evaluated in the analysis predicate context. This filter supports only inline Painless scripts.

Parameters

The predicate_token_filter has one required parameter: script. This parameter provides the condition used to determine whether a token is kept.
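
Because the script runs in the analysis predicate context, the condition can test token properties beyond the term text, such as the token's position in the stream. The following minimal sketch is an assumption based on the properties exposed by the Painless analysis predicate context (it is not part of the example below); it defines a hypothetical filter named skip_first_token that discards the token at position 0:

  PUT /position_index
  {
    "settings": {
      "analysis": {
        "filter": {
          "skip_first_token": {
            "type": "predicate_token_filter",
            "script": {
              "source": "token.position > 0"
            }
          }
        }
      }
    }
  }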

Example

The following example request creates a new index named predicate_index and configures an analyzer with a predicate_token_filter. The filter outputs only tokens longer than 7 characters. Note that because my_predicate_filter is applied after the lowercase filter, the script evaluates the lowercased terms:

  PUT /predicate_index
  {
    "settings": {
      "analysis": {
        "filter": {
          "my_predicate_filter": {
            "type": "predicate_token_filter",
            "script": {
              "source": "token.term.length() > 7"
            }
          }
        },
        "analyzer": {
          "predicate_analyzer": {
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "my_predicate_filter"
            ]
          }
        }
      }
    }
  }

Generated tokens

Use the following request to examine the tokens generated by the analyzer:

  POST /predicate_index/_analyze
  {
    "text": "The OpenSearch community is growing rapidly",
    "analyzer": "predicate_analyzer"
  }

The response contains the generated tokens. Only "opensearch" (10 characters) and "community" (9 characters) are longer than 7 characters; "growing" and "rapidly" are exactly 7 characters long, so they are discarded along with the shorter tokens:

  {
    "tokens": [
      {
        "token": "opensearch",
        "start_offset": 4,
        "end_offset": 14,
        "type": "<ALPHANUM>",
        "position": 1
      },
      {
        "token": "community",
        "start_offset": 15,
        "end_offset": 24,
        "type": "<ALPHANUM>",
        "position": 2
      }
    ]
  }
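
To apply the analyzer to incoming documents, reference it in a field mapping. The following is a minimal sketch, assuming a hypothetical text field named description in predicate_index:

  PUT /predicate_index/_mapping
  {
    "properties": {
      "description": {
        "type": "text",
        "analyzer": "predicate_analyzer"
      }
    }
  }

With this mapping, any text indexed into the description field is tokenized, lowercased, and then reduced to the tokens longer than 7 characters.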