Trim token filter

The trim token filter removes leading and trailing white space characters from tokens.

Many popular tokenizers, such as the standard, keyword, and whitespace tokenizers, automatically strip leading and trailing white space characters during tokenization. When using these tokenizers, there is no need to configure an additional trim token filter.
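The filter's effect on a single token can be sketched in plain Python: it removes leading and trailing white space while leaving interior spaces intact, much like `str.strip()`. This is an illustrative sketch, not OpenSearch code:

```python
def trim_token(token: str) -> str:
    # Remove leading and trailing white space characters only;
    # white space inside the token is preserved.
    return token.strip()

print(trim_token("  OpenSearch  "))   # -> "OpenSearch"
print(trim_token(" is powerful "))    # -> "is powerful"
```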

Example

The following example request creates a new index named my_pattern_trim_index and configures an analyzer with a pattern tokenizer, which does not remove leading and trailing white space characters, and a trim filter:

  PUT /my_pattern_trim_index
  {
    "settings": {
      "analysis": {
        "filter": {
          "my_trim_filter": {
            "type": "trim"
          }
        },
        "tokenizer": {
          "my_pattern_tokenizer": {
            "type": "pattern",
            "pattern": ","
          }
        },
        "analyzer": {
          "my_pattern_trim_analyzer": {
            "type": "custom",
            "tokenizer": "my_pattern_tokenizer",
            "filter": [
              "lowercase",
              "my_trim_filter"
            ]
          }
        }
      }
    }
  }

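Because the pattern tokenizer splits only on the configured pattern (a comma here), the white space surrounding each token is kept. A sketch of this behavior, using Python's `re.split` to stand in for the pattern tokenizer:

```python
import re

# The pattern tokenizer splits on the configured pattern (","),
# keeping the surrounding white space as part of each token.
text = " OpenSearch , is , powerful "
raw_tokens = re.split(",", text)
print(raw_tokens)  # -> [' OpenSearch ', ' is ', ' powerful ']
```

These raw tokens are what the trim filter then cleans up.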

Generated tokens

Use the following request to examine the tokens generated by the analyzer:

  GET /my_pattern_trim_index/_analyze
  {
    "analyzer": "my_pattern_trim_analyzer",
    "text": " OpenSearch , is , powerful "
  }


The response contains the generated tokens:

  {
    "tokens": [
      {
        "token": "opensearch",
        "start_offset": 0,
        "end_offset": 12,
        "type": "word",
        "position": 0
      },
      {
        "token": "is",
        "start_offset": 13,
        "end_offset": 18,
        "type": "word",
        "position": 1
      },
      {
        "token": "powerful",
        "start_offset": 19,
        "end_offset": 32,
        "type": "word",
        "position": 2
      }
    ]
  }
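The full analyzer pipeline can be approximated in plain Python: split on commas (pattern tokenizer), lowercase each token (lowercase filter), then strip the surrounding white space (trim filter). A minimal sketch of the token text produced, ignoring offsets and positions:

```python
import re

def analyze(text: str) -> list[str]:
    # Pattern tokenizer: split the input on commas.
    tokens = re.split(",", text)
    # Lowercase filter, then trim filter, applied to each token.
    return [token.lower().strip() for token in tokens]

print(analyze(" OpenSearch , is , powerful "))
# -> ['opensearch', 'is', 'powerful']
```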