CJK analyzer

The built-in cjk analyzer can be applied to a text field using the following command:

```json
PUT /cjk-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "cjk"
      }
    }
  }
}
```


Stem exclusion

You can configure stem_exclusion for this language analyzer using the following command:

```json
PUT index_with_stem_exclusion_cjk_analyzer
{
  "settings": {
    "analysis": {
      "analyzer": {
        "stem_exclusion_cjk_analyzer": {
          "type": "cjk",
          "stem_exclusion": ["example", "words"]
        }
      }
    }
  }
}
```


CJK analyzer internals

The cjk analyzer is built using the following components:

  • Tokenizer: standard

  • Token filters:

    • cjk_width
    • lowercase
    • cjk_bigram
    • stop (using a stopword list similar to the default English list)
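The cjk_width filter normalizes character-width variants: full-width ASCII becomes half-width, and half-width katakana becomes full-width. As a rough illustration (not the Lucene filter itself), Unicode NFKC normalization folds the same width variants:

```python
import unicodedata

def fold_width(text: str) -> str:
    """Approximation of cjk_width using NFKC normalization.

    NFKC folds full-width ASCII to half-width and half-width katakana to
    full-width, like cjk_width does. Note that NFKC also applies other
    compatibility mappings, so this is an illustration, not the filter.
    """
    return unicodedata.normalize("NFKC", text)

# Full-width digits become half-width ASCII digits:
print(fold_width("１２３４５６"))  # → 123456
# Half-width katakana becomes full-width katakana:
print(fold_width("ｶﾀｶﾅ"))  # → カタカナ
```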

Custom CJK analyzer

You can create a custom CJK analyzer using the following command:

```json
PUT /cjk-index
{
  "settings": {
    "analysis": {
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": [
            "a", "and", "are", "as", "at", "be", "but", "by", "for",
            "if", "in", "into", "is", "it", "no", "not", "of", "on",
            "or", "s", "such", "t", "that", "the", "their", "then",
            "there", "these", "they", "this", "to", "was", "will",
            "with", "www"
          ]
        }
      },
      "analyzer": {
        "cjk_custom_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "cjk_width",
            "lowercase",
            "cjk_bigram",
            "english_stop"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "cjk_custom_analyzer"
      }
    }
  }
}
```

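The english_stop filter defined above drops every token that appears in its stopword list and passes all other tokens through unchanged. A minimal sketch of that behavior (the function name and sample token stream are illustrative, not part of the OpenSearch API):

```python
# Same stopword list as the english_stop filter above.
ENGLISH_STOPWORDS = {
    "a", "and", "are", "as", "at", "be", "but", "by", "for",
    "if", "in", "into", "is", "it", "no", "not", "of", "on",
    "or", "s", "such", "t", "that", "the", "their", "then",
    "there", "these", "they", "this", "to", "was", "will",
    "with", "www",
}

def remove_stopwords(tokens):
    """Keep only tokens that are not in the stopword set."""
    return [t for t in tokens if t not in ENGLISH_STOPWORDS]

# CJK tokens are unaffected; English stopwords are removed:
print(remove_stopwords(["the", "学生", "at", "大学"]))  # → ['学生', '大学']
```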

Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk-index/_analyze
{
  "field": "content",
  "text": "学生们在中国、日本和韩国的大学学习。123456"
}
```


The response contains the generated tokens:

```json
{
  "tokens": [
    {"token": "学生","start_offset": 0,"end_offset": 2,"type": "<DOUBLE>","position": 0},
    {"token": "生们","start_offset": 1,"end_offset": 3,"type": "<DOUBLE>","position": 1},
    {"token": "们在","start_offset": 2,"end_offset": 4,"type": "<DOUBLE>","position": 2},
    {"token": "在中","start_offset": 3,"end_offset": 5,"type": "<DOUBLE>","position": 3},
    {"token": "中国","start_offset": 4,"end_offset": 6,"type": "<DOUBLE>","position": 4},
    {"token": "日本","start_offset": 7,"end_offset": 9,"type": "<DOUBLE>","position": 5},
    {"token": "本和","start_offset": 8,"end_offset": 10,"type": "<DOUBLE>","position": 6},
    {"token": "和韩","start_offset": 9,"end_offset": 11,"type": "<DOUBLE>","position": 7},
    {"token": "韩国","start_offset": 10,"end_offset": 12,"type": "<DOUBLE>","position": 8},
    {"token": "国的","start_offset": 11,"end_offset": 13,"type": "<DOUBLE>","position": 9},
    {"token": "的大","start_offset": 12,"end_offset": 14,"type": "<DOUBLE>","position": 10},
    {"token": "大学","start_offset": 13,"end_offset": 15,"type": "<DOUBLE>","position": 11},
    {"token": "学学","start_offset": 14,"end_offset": 16,"type": "<DOUBLE>","position": 12},
    {"token": "学习","start_offset": 15,"end_offset": 17,"type": "<DOUBLE>","position": 13},
    {"token": "123456","start_offset": 18,"end_offset": 24,"type": "<NUM>","position": 14}
  ]
}
```
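The bigram pattern in this response can be reproduced with a short sketch: the standard tokenizer discards punctuation such as 、 and 。, cjk_bigram then emits overlapping character pairs within each remaining CJK run, and numeric runs pass through as single tokens. This is a simplified model, not the Lucene implementation:

```python
import re

def cjk_bigrams(text: str):
    """Simplified model of the cjk analyzer's token output.

    Splits the input into runs of CJK ideographs and runs of ASCII
    letters/digits (punctuation acts as a boundary), then emits overlapping
    bigrams for each CJK run and passes other runs through whole.
    """
    tokens = []
    for run in re.findall(r"[\u4e00-\u9fff]+|[A-Za-z0-9]+", text):
        if re.match(r"[\u4e00-\u9fff]", run):
            if len(run) == 1:
                tokens.append(run)  # a lone CJK character stays a single token
            else:
                tokens.extend(run[i:i + 2] for i in range(len(run) - 1))
        else:
            tokens.append(run.lower())  # non-CJK runs pass through whole
    return tokens

# Matches the 15 tokens in the response above: the 。 before 123456 and the
# 、 between 中国 and 日本 break the CJK runs, so no bigram spans them.
print(cjk_bigrams("学生们在中国、日本和韩国的大学学习。123456"))
```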