Letter tokenizer
The letter tokenizer breaks text into terms whenever it encounters a character which is not a letter. It does a reasonable job for most European languages, but does a terrible job for some Asian languages, where words are not separated by spaces.
Example output
resp = client.indices.analyze(
    tokenizer="letter",
    text="The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
)
print(resp)
response = client.indices.analyze(
  body: {
    tokenizer: 'letter',
    text: "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
  }
)
puts response
const response = await client.indices.analyze({
  tokenizer: "letter",
  text: "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
});
console.log(response);
POST _analyze
{
  "tokenizer": "letter",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
The above sentence would produce the following terms:
[ The, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, s, bone ]
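The splitting behavior above can be approximated in plain Python with a regular expression that matches runs of letters. This is a rough sketch, not the tokenizer's actual implementation: Elasticsearch decides letter-hood via Unicode character classes in Lucene, which the `[^\W\d_]` pattern only approximates.

```python
import re

def letter_tokenize(text):
    # Emit maximal runs of letter characters; digits, underscores,
    # punctuation, and whitespace all act as token breaks.
    # [^\W\d_] matches a "word" character that is not a digit or
    # underscore, i.e. (approximately) a Unicode letter.
    return re.findall(r"[^\W\d_]+", text)

tokens = letter_tokenize(
    "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
)
print(tokens)
```

Note how the digit `2` disappears entirely and `dog's` splits into `dog` and `s`, matching the term list above.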
Configuration
The letter tokenizer is not configurable.