Latvian analyzer

The built-in latvian analyzer can be applied to a text field using the following command:

  PUT /latvian-index
  {
    "mappings": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "latvian"
        }
      }
    }
  }

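With this mapping in place, any text indexed into the content field is analyzed with the latvian analyzer. As a minimal sketch, you can index a sample document into the index created above (the document ID and text here are illustrative):

  PUT /latvian-index/_doc/1
  {
    "content": "Studenti mācās Latvijas universitātēs."
  }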

Stem exclusion

You can use stem_exclusion with this language analyzer as follows:

  PUT index_with_stem_exclusion_latvian_analyzer
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "stem_exclusion_latvian_analyzer": {
            "type": "latvian",
            "stem_exclusion": ["autoritāte", "apstiprinājums"]
          }
        }
      }
    }
  }

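Words listed in stem_exclusion are indexed in their original form rather than being reduced to stems. As a quick check, you can run the analyzer defined above against the excluded words; this request is a sketch, and the exact tokens returned depend on the stemmer:

  POST /index_with_stem_exclusion_latvian_analyzer/_analyze
  {
    "analyzer": "stem_exclusion_latvian_analyzer",
    "text": "autoritāte apstiprinājums"
  }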

Latvian analyzer internals

The latvian analyzer is built using the following components:

  • Tokenizer: standard

  • Token filters:

    • lowercase
    • stop (Latvian)
    • keyword
    • stemmer (Latvian)

Custom Latvian analyzer

You can create a custom Latvian analyzer using the following command:

  PUT /latvian-index
  {
    "settings": {
      "analysis": {
        "filter": {
          "latvian_stop": {
            "type": "stop",
            "stopwords": "_latvian_"
          },
          "latvian_stemmer": {
            "type": "stemmer",
            "language": "latvian"
          },
          "latvian_keywords": {
            "type": "keyword_marker",
            "keywords": []
          }
        },
        "analyzer": {
          "latvian_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "latvian_stop",
              "latvian_keywords",
              "latvian_stemmer"
            ]
          }
        }
      }
    },
    "mappings": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "latvian_analyzer"
        }
      }
    }
  }

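In this custom analyzer, the latvian_keywords filter (a keyword_marker filter) plays the same role as stem_exclusion in the built-in analyzer: terms listed in its keywords array are marked as keywords and left untouched by the stemmer. As a sketch, the exclusion list from the earlier example could be supplied by replacing the empty array in the filter definition above:

  "latvian_keywords": {
    "type": "keyword_marker",
    "keywords": ["autoritāte", "apstiprinājums"]
  }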

Generated tokens

Use the following request to examine the tokens generated using the analyzer:

  POST /latvian-index/_analyze
  {
    "field": "content",
    "text": "Studenti mācās Latvijas universitātēs. Viņu numuri ir 123456."
  }


The response contains the generated tokens:

  {
    "tokens": [
      {"token": "student","start_offset": 0,"end_offset": 8,"type": "<ALPHANUM>","position": 0},
      {"token": "māc","start_offset": 9,"end_offset": 14,"type": "<ALPHANUM>","position": 1},
      {"token": "latvij","start_offset": 15,"end_offset": 23,"type": "<ALPHANUM>","position": 2},
      {"token": "universitāt","start_offset": 24,"end_offset": 37,"type": "<ALPHANUM>","position": 3},
      {"token": "vin","start_offset": 39,"end_offset": 43,"type": "<ALPHANUM>","position": 4},
      {"token": "numur","start_offset": 44,"end_offset": 50,"type": "<ALPHANUM>","position": 5},
      {"token": "123456","start_offset": 54,"end_offset": 60,"type": "<NUM>","position": 7}
    ]
  }
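
Because the same analyzer is applied to the content field at both index time and query time, inflected forms that reduce to the same stem match one another. For example, assuming the singular form "universitāte" stems to "universitāt", just like "universitātēs" in the output above, the following query sketch would match a document containing the original sentence:

  GET /latvian-index/_search
  {
    "query": {
      "match": {
        "content": "universitāte"
      }
    }
  }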