Letter tokenizer

The letter tokenizer breaks text into terms whenever it encounters a character which is not a letter. It does a reasonable job for most European languages, but does a terrible job for some Asian languages, where words are not separated by spaces.

Example output

Python:

  resp = client.indices.analyze(
      tokenizer="letter",
      text="The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
  )
  print(resp)

Ruby:

  response = client.indices.analyze(
    body: {
      tokenizer: 'letter',
      text: "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
    }
  )
  puts response

JavaScript:

  const response = await client.indices.analyze({
    tokenizer: "letter",
    text: "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
  });
  console.log(response);

Console:

  POST _analyze
  {
    "tokenizer": "letter",
    "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
  }

The above sentence would produce the following terms:

  [ The, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, s, bone ]

Configuration

The letter tokenizer is not configurable.
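
Although the tokenizer has no options of its own, it can still be referenced by name when defining a custom analyzer. The sketch below (the index name, analyzer name, and the pairing with the lowercase token filter are illustrative assumptions, not part of the original example) shows one way to do this with the Python client:

  # Minimal sketch: wrap the letter tokenizer in a custom analyzer.
  # "my-letter-index", "my_letter_analyzer", and the lowercase filter
  # are hypothetical choices used only for illustration.
  resp = client.indices.create(
      index="my-letter-index",
      settings={
          "analysis": {
              "analyzer": {
                  "my_letter_analyzer": {
                      "type": "custom",
                      "tokenizer": "letter",
                      "filter": ["lowercase"],
                  }
              }
          }
      },
  )
  print(resp)

Text processed by this analyzer would first be split on non-letter characters by the letter tokenizer and then lowercased by the token filter.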