Pattern replace token filter

The pattern_replace token filter allows you to modify tokens using regular expressions. The filter replaces the characters matching a pattern in each token with a specified replacement string, giving you flexibility in transforming or normalizing tokens before indexing them. It’s particularly useful when you need to clean or standardize text during analysis.

Parameters

The pattern_replace token filter can be configured with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`pattern` | Required | String | A regular expression pattern that matches the text to be replaced.
`all` | Optional | Boolean | Whether to replace all pattern matches. If `false`, only the first match is replaced. Default is `true`.
`replacement` | Optional | String | The string with which to replace the matched pattern. Default is an empty string.

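For example, to replace only the first pattern match in each token, set all to false. The following request is a minimal sketch of such a configuration (the index and filter names are illustrative and not part of the example that follows):

```json
PUT /example_index
{
  "settings": {
    "analysis": {
      "filter": {
        "first_match_replace_filter": {
          "type": "pattern_replace",
          "pattern": "\\d+",
          "replacement": "#",
          "all": false
        }
      }
    }
  }
}
```

With this configuration, only the first run of digits in a token is replaced, so a token such as a1b2 would be expected to become a#b2.
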
Example

The following example request creates a new index named text_index and configures an analyzer with a pattern_replace filter that replaces sequences of digits in tokens with the string [NUM]:

```json
PUT /text_index
{
  "settings": {
    "analysis": {
      "filter": {
        "number_replace_filter": {
          "type": "pattern_replace",
          "pattern": "\\d+",
          "replacement": "[NUM]"
        }
      },
      "analyzer": {
        "number_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "number_replace_filter"
          ]
        }
      }
    }
  }
}
```

Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /text_index/_analyze
{
  "text": "Visit us at 98765 Example St.",
  "analyzer": "number_analyzer"
}
```

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "visit",
      "start_offset": 0,
      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "us",
      "start_offset": 6,
      "end_offset": 8,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "at",
      "start_offset": 9,
      "end_offset": 11,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "[NUM]",
      "start_offset": 12,
      "end_offset": 17,
      "type": "<NUM>",
      "position": 3
    },
    {
      "token": "example",
      "start_offset": 18,
      "end_offset": 25,
      "type": "<ALPHANUM>",
      "position": 4
    },
    {
      "token": "st",
      "start_offset": 26,
      "end_offset": 28,
      "type": "<ALPHANUM>",
      "position": 5
    }
  ]
}
```
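
Note that the filter rewrites only the matched characters within a token rather than replacing the whole token. To see this, you can analyze text containing a mixed token (the sample text below is illustrative); with the analyzer above, a token such as Suite12B would be expected to come back as suite[NUM]b:

```json
POST /text_index/_analyze
{
  "text": "Suite12B",
  "analyzer": "number_analyzer"
}
```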