Galician analyzer

The built-in galician analyzer can be applied to a text field using the following command:

  PUT /galician-index
  {
    "mappings": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "galician"
        }
      }
    }
  }


Stem exclusion

You can use stem_exclusion with this language analyzer using the following command:

  PUT index_with_stem_exclusion_galician_analyzer
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "stem_exclusion_galician_analyzer": {
            "type": "galician",
            "stem_exclusion": ["autoridade", "aceptación"]
          }
        }
      }
    }
  }

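To verify the effect of stem exclusion, you can analyze a sample sentence against this index. The sentence below is only an illustration (any Galician text containing the excluded terms works): the excluded terms autoridade and aceptación should pass through the stemmer unchanged apart from lowercasing, while the remaining words are stemmed:

  GET /index_with_stem_exclusion_galician_analyzer/_analyze
  {
    "analyzer": "stem_exclusion_galician_analyzer",
    "text": "A autoridade anunciou a aceptación da proposta"
  }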

Galician analyzer internals

The galician analyzer is built using the following components:

  • Tokenizer: standard

  • Token filters:

    • lowercase
    • stop (Galician)
    • keyword
    • stemmer (Galician)

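To see these components in action without creating an index, you can pass the built-in analyzer name directly to the _analyze API (the sample sentence is arbitrary):

  GET /_analyze
  {
    "analyzer": "galician",
    "text": "Os estudantes estudan nas universidades galegas"
  }
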
Custom Galician analyzer

You can create a custom Galician analyzer using the following command:

  PUT /galician-index
  {
    "settings": {
      "analysis": {
        "filter": {
          "galician_stop": {
            "type": "stop",
            "stopwords": "_galician_"
          },
          "galician_stemmer": {
            "type": "stemmer",
            "language": "galician"
          },
          "galician_keywords": {
            "type": "keyword_marker",
            "keywords": []
          }
        },
        "analyzer": {
          "galician_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "galician_stop",
              "galician_keywords",
              "galician_stemmer"
            ]
          }
        }
      }
    },
    "mappings": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "galician_analyzer"
        }
      }
    }
  }


Generated tokens

Use the following request to examine the tokens generated using the analyzer:

  POST /galician-index/_analyze
  {
    "field": "content",
    "text": "Os estudantes estudan en Santiago e nas universidades galegas. Os seus números son 123456."
  }


The response contains the generated tokens:

  {
    "tokens": [
      {"token": "estud", "start_offset": 3, "end_offset": 13, "type": "<ALPHANUM>", "position": 1},
      {"token": "estud", "start_offset": 14, "end_offset": 21, "type": "<ALPHANUM>", "position": 2},
      {"token": "santiag", "start_offset": 25, "end_offset": 33, "type": "<ALPHANUM>", "position": 4},
      {"token": "univers", "start_offset": 40, "end_offset": 53, "type": "<ALPHANUM>", "position": 7},
      {"token": "galeg", "start_offset": 54, "end_offset": 61, "type": "<ALPHANUM>", "position": 8},
      {"token": "numer", "start_offset": 71, "end_offset": 78, "type": "<ALPHANUM>", "position": 11},
      {"token": "son", "start_offset": 79, "end_offset": 82, "type": "<ALPHANUM>", "position": 12},
      {"token": "123456", "start_offset": 83, "end_offset": 89, "type": "<NUM>", "position": 13}
    ]
  }