CJK width token filter

Normalizes width differences in CJK (Chinese, Japanese, and Korean) characters as follows:

  • Folds full-width ASCII character variants into the equivalent basic Latin characters
  • Folds half-width Katakana character variants into the equivalent Kana characters

This filter is included in Elasticsearch’s built-in CJK language analyzer. It uses Lucene’s CJKWidthFilter.

This token filter can be viewed as a subset of NFKC/NFKD Unicode normalization. See the analysis-icu plugin for full normalization support.
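If full normalization is needed and the analysis-icu plugin is installed, a minimal sketch of an ICU-normalizing analyzer might look like the following (the index name icu_sample and the analyzer name are illustrative; the plugin's icu_normalizer token filter defaults to NFKC case folding):

  PUT /icu_sample
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "nfkc_cf_normalized": {
            "tokenizer": "standard",
            "filter": [ "icu_normalizer" ]
          }
        }
      }
    }
  }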

Example

The following analyze API request uses the filter to fold half-width ｼｰｻｲﾄﾞﾗｲﾅｰ to full-width シーサイドライナー:

  GET /_analyze
  {
    "tokenizer" : "standard",
    "filter" : ["cjk_width"],
    "text" : "ｼｰｻｲﾄﾞﾗｲﾅｰ"
  }

The filter produces the following token:

  シーサイドライナー
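The filter likewise folds full-width ASCII variants into basic Latin characters. For example (the full-width sample text here is illustrative, not from the original example):

  GET /_analyze
  {
    "tokenizer" : "standard",
    "filter" : ["cjk_width"],
    "text" : "Ｅｌａｓｔｉｃｓｅａｒｃｈ"
  }

This produces the single token Elasticsearch.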

Add to an analyzer

The following create index API request uses the CJK width token filter to configure a new custom analyzer.

  PUT /cjk_width_example
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "standard_cjk_width": {
            "tokenizer": "standard",
            "filter": [ "cjk_width" ]
          }
        }
      }
    }
  }
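
To verify the custom analyzer, you can run the analyze API against the new index, reusing the half-width sample from above:

  GET /cjk_width_example/_analyze
  {
    "analyzer": "standard_cjk_width",
    "text": "ｼｰｻｲﾄﾞﾗｲﾅｰ"
  }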