Normalization token filter
The normalization token filter adjusts and simplifies text in order to reduce variations, particularly in special characters. It is primarily used to handle differences in writing by standardizing language-specific characters.
The following normalization token filters are available:
- arabic_normalization
- german_normalization
- hindi_normalization
- indic_normalization
- sorani_normalization
- persian_normalization
- scandinavian_normalization
- scandinavian_folding
- serbian_normalization
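You can try any of these filters directly using the _analyze API, without first creating an index. The following request is a minimal sketch that applies the scandinavian_folding filter to a sample Swedish word; that filter folds characters such as å, ä, and ö into a and o, so the expected result is a single token such as raksmorgas:

POST /_analyze
{
  "tokenizer": "standard",
  "filter": ["scandinavian_folding"],
  "text": "räksmörgås"
}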
Example
The following example request creates a new index named german_normalizer_example and configures an analyzer with a german_normalization filter:
PUT /german_normalizer_example
{
  "settings": {
    "analysis": {
      "filter": {
        "german_normalizer": {
          "type": "german_normalization"
        }
      },
      "analyzer": {
        "german_normalizer_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "german_normalizer"
          ]
        }
      }
    }
  }
}
Generated tokens
Use the following request to examine the tokens generated by the analyzer:
POST /german_normalizer_example/_analyze
{
  "text": "Straße München",
  "analyzer": "german_normalizer_analyzer"
}
The response contains the generated tokens:
{
  "tokens": [
    {
      "token": "strasse",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "munchen",
      "start_offset": 7,
      "end_offset": 14,
      "type": "<ALPHANUM>",
      "position": 1
    }
  ]
}
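In the generated tokens, the german_normalization filter has replaced ß with ss and removed the umlaut from ü, so Straße and München are indexed as strasse and munchen (the lowercase filter in the analyzer chain handles the initial capitals).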