ASCII folding token filter

The asciifolding token filter converts non-ASCII characters to their closest ASCII equivalents. For example, é becomes e, ü becomes u, and ñ becomes n. This process is known as transliteration.

The asciifolding token filter offers a number of benefits:

  • Enhanced search flexibility: Users often omit accents or special characters when entering queries. The asciifolding token filter ensures that such queries still return relevant results.
  • Normalization: Standardizes the indexing process by ensuring that accented characters are consistently converted to their ASCII equivalents.
  • Internationalization: Particularly useful for applications that support multiple languages and character sets.

While the asciifolding token filter can simplify searches, it can also cause a loss of information, particularly if the distinction between accented and unaccented characters in your dataset is significant.
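You can observe this loss directly by applying the filter on its own through the _analyze API, which accepts an ad hoc filter list without requiring an index. In the following sketch, preserve_original is left at its default of false, so only the folded token remains:

  POST /_analyze
  {
    "tokenizer": "standard",
    "filter": [ "asciifolding" ],
    "text": "résumé"
  }

The only token produced is resume; the accented original is discarded.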

Parameters

You can configure the asciifolding token filter using the preserve_original parameter. Setting this parameter to true keeps both the original token and its ASCII-folded version in the token stream. This can be particularly useful when you want to match both the original (with accents) and the normalized (without accents) versions of a term in a search query. Default is false.
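To try the parameter without creating an index, you can pass the filter definition inline to the _analyze API. The following is a quick sketch; the full index-based example follows below:

  POST /_analyze
  {
    "tokenizer": "standard",
    "filter": [
      {
        "type": "asciifolding",
        "preserve_original": true
      }
    ],
    "text": "café"
  }

This returns two tokens at the same position: the folded form cafe and the original café.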

Example

The following example request creates a new index named example_index and defines an analyzer with the asciifolding filter and preserve_original parameter set to true:

  PUT /example_index
  {
    "settings": {
      "analysis": {
        "filter": {
          "custom_ascii_folding": {
            "type": "asciifolding",
            "preserve_original": true
          }
        },
        "analyzer": {
          "custom_ascii_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "custom_ascii_folding"
            ]
          }
        }
      }
    }
  }


Generated tokens

Use the following request to examine the tokens generated by the analyzer:

  POST /example_index/_analyze
  {
    "analyzer": "custom_ascii_analyzer",
    "text": "Résumé café naïve coördinate"
  }


The response contains the generated tokens:

  {
    "tokens": [
      {
        "token": "resume",
        "start_offset": 0,
        "end_offset": 6,
        "type": "<ALPHANUM>",
        "position": 0
      },
      {
        "token": "résumé",
        "start_offset": 0,
        "end_offset": 6,
        "type": "<ALPHANUM>",
        "position": 0
      },
      {
        "token": "cafe",
        "start_offset": 7,
        "end_offset": 11,
        "type": "<ALPHANUM>",
        "position": 1
      },
      {
        "token": "café",
        "start_offset": 7,
        "end_offset": 11,
        "type": "<ALPHANUM>",
        "position": 1
      },
      {
        "token": "naive",
        "start_offset": 12,
        "end_offset": 17,
        "type": "<ALPHANUM>",
        "position": 2
      },
      {
        "token": "naïve",
        "start_offset": 12,
        "end_offset": 17,
        "type": "<ALPHANUM>",
        "position": 2
      },
      {
        "token": "coordinate",
        "start_offset": 18,
        "end_offset": 28,
        "type": "<ALPHANUM>",
        "position": 3
      },
      {
        "token": "coördinate",
        "start_offset": 18,
        "end_offset": 28,
        "type": "<ALPHANUM>",
        "position": 3
      }
    ]
  }
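To see the filter's effect at search time, you can map a field to the custom analyzer and query it with an unaccented term. The following sketch assumes a hypothetical title field; the example above defines only analysis settings, not mappings:

  PUT /example_index/_mapping
  {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "custom_ascii_analyzer"
      }
    }
  }

  PUT /example_index/_doc/1?refresh=true
  {
    "title": "Résumé"
  }

  GET /example_index/_search
  {
    "query": {
      "match": {
        "title": "resume"
      }
    }
  }

Because preserve_original is set to true, queries for both resume and résumé match this document.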