English analyzer

The built-in english analyzer can be applied to a text field using the following command:

  PUT /english-index
  {
    "mappings": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "english"
        }
      }
    }
  }

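As a quick check of the stemming behavior, you can index a document and match it with a differently inflected query term. The following is a minimal sketch using a hypothetical document:

  PUT /english-index/_doc/1
  {
    "content": "The students are studying."
  }

  GET /english-index/_search
  {
    "query": {
      "match": {
        "content": "study"
      }
    }
  }

The match query analyzes study with the same english analyzer, so both studying and study reduce to the stem studi and the document is returned.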

Stem exclusion

You can configure stem_exclusion for this language analyzer using the following command:

  PUT index_with_stem_exclusion_english_analyzer
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "stem_exclusion_english_analyzer": {
            "type": "english",
            "stem_exclusion": ["authority", "authorization"]
          }
        }
      }
    }
  }

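To see the effect of stem_exclusion, analyze text that contains one of the excluded words. This is a minimal sketch; the exact token output depends on the stemmer version:

  GET /index_with_stem_exclusion_english_analyzer/_analyze
  {
    "analyzer": "stem_exclusion_english_analyzer",
    "text": "authorization of the communities"
  }

Because authorization is listed in stem_exclusion, it is emitted unchanged, while communities is stemmed as usual and the stopwords of and the are removed.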

English analyzer internals

The english analyzer is built using the following components:

  • Tokenizer: standard

  • Token filters:

    • stemmer (possessive_english)
    • lowercase
    • stop (English)
    • keyword_marker
    • stemmer (English)

Custom English analyzer

You can create a custom analyzer equivalent to the built-in english analyzer using the following command:

  PUT /english-index
  {
    "settings": {
      "analysis": {
        "filter": {
          "english_stop": {
            "type": "stop",
            "stopwords": "_english_"
          },
          "english_stemmer": {
            "type": "stemmer",
            "language": "english"
          },
          "english_keywords": {
            "type": "keyword_marker",
            "keywords": []
          },
          "english_possessive_stemmer": {
            "type": "stemmer",
            "language": "possessive_english"
          }
        },
        "analyzer": {
          "english_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": [
              "english_possessive_stemmer",
              "lowercase",
              "english_stop",
              "english_keywords",
              "english_stemmer"
            ]
          }
        }
      }
    },
    "mappings": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "english_analyzer"
        }
      }
    }
  }

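As written, the english_keywords filter is a keyword_marker with an empty keywords list, so it has no effect; it is a placeholder you can populate to protect specific terms from stemming. For example, with a hypothetical keyword list:

  "english_keywords": {
    "type": "keyword_marker",
    "keywords": ["engineering"]
  }

With this change, engineering would be marked as a keyword and pass through both stemmer filters unmodified.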

Generated tokens

Use the following request to examine the tokens generated by the analyzer:

  POST /english-index/_analyze
  {
    "field": "content",
    "text": "The students study in the USA and work at NASA. Their numbers are 123456."
  }


The response contains the generated tokens:

  {
    "tokens": [
      {"token": "student", "start_offset": 4, "end_offset": 12, "type": "<ALPHANUM>", "position": 1},
      {"token": "studi", "start_offset": 13, "end_offset": 18, "type": "<ALPHANUM>", "position": 2},
      {"token": "usa", "start_offset": 26, "end_offset": 29, "type": "<ALPHANUM>", "position": 5},
      {"token": "work", "start_offset": 34, "end_offset": 38, "type": "<ALPHANUM>", "position": 7},
      {"token": "nasa", "start_offset": 42, "end_offset": 46, "type": "<ALPHANUM>", "position": 9},
      {"token": "number", "start_offset": 54, "end_offset": 61, "type": "<ALPHANUM>", "position": 11},
      {"token": "123456", "start_offset": 66, "end_offset": 72, "type": "<NUM>", "position": 13}
    ]
  }
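
Note that the stopwords the, in, and, at, their, and are are removed from the output, which is why the token positions are not consecutive. The remaining tokens are lowercased and stemmed: students becomes student, study becomes studi, and numbers becomes number.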