N-gram token filter

Forms n-grams of specified lengths from a token.

For example, you can use the ngram token filter to change fox to [ f, fo, o, ox, x ].
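Conceptually, the filter slides a window of every allowed length across the token and emits each substring. A minimal Python sketch of this behavior (an illustration only, not the Lucene implementation) using the default min_gram of 1 and max_gram of 2:

```python
def char_ngrams(token, min_gram=1, max_gram=2):
    """Return all character n-grams of token, ordered by start
    position, then by length (shortest first), as the filter does."""
    grams = []
    for start in range(len(token)):
        for size in range(min_gram, max_gram + 1):
            if start + size <= len(token):
                grams.append(token[start:start + size])
    return grams

print(char_ngrams("fox"))  # ['f', 'fo', 'o', 'ox', 'x']
```

This reproduces the example above: fox becomes [ f, fo, o, ox, x ].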

This filter uses Lucene’s NGramTokenFilter.

The ngram filter is similar to the edge_ngram token filter. However, the edge_ngram filter only outputs n-grams that start at the beginning of a token.

Example

The following analyze API request uses the ngram filter to convert Quick fox to 1-character and 2-character n-grams:

```python
resp = client.indices.analyze(
    tokenizer="standard",
    filter=[
        "ngram"
    ],
    text="Quick fox",
)
print(resp)
```

```ruby
response = client.indices.analyze(
  body: {
    tokenizer: 'standard',
    filter: [
      'ngram'
    ],
    text: 'Quick fox'
  }
)
puts response
```

```js
const response = await client.indices.analyze({
  tokenizer: "standard",
  filter: ["ngram"],
  text: "Quick fox",
});
console.log(response);
```

```console
GET _analyze
{
  "tokenizer": "standard",
  "filter": [ "ngram" ],
  "text": "Quick fox"
}
```

The filter produces the following tokens:

```text
[ Q, Qu, u, ui, i, ic, c, ck, k, f, fo, o, ox, x ]
```

Add to an analyzer

The following create index API request uses the ngram filter to configure a new custom analyzer.

```python
resp = client.indices.create(
    index="ngram_example",
    settings={
        "analysis": {
            "analyzer": {
                "standard_ngram": {
                    "tokenizer": "standard",
                    "filter": [
                        "ngram"
                    ]
                }
            }
        }
    },
)
print(resp)
```

```ruby
response = client.indices.create(
  index: 'ngram_example',
  body: {
    settings: {
      analysis: {
        analyzer: {
          standard_ngram: {
            tokenizer: 'standard',
            filter: [
              'ngram'
            ]
          }
        }
      }
    }
  }
)
puts response
```

```js
const response = await client.indices.create({
  index: "ngram_example",
  settings: {
    analysis: {
      analyzer: {
        standard_ngram: {
          tokenizer: "standard",
          filter: ["ngram"],
        },
      },
    },
  },
});
console.log(response);
```

```console
PUT ngram_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "standard_ngram": {
          "tokenizer": "standard",
          "filter": [ "ngram" ]
        }
      }
    }
  }
}
```
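Once the index exists, you can verify the analyzer with the analyze API. A hypothetical follow-up request (the index and analyzer names come from the example above):

```console
GET ngram_example/_analyze
{
  "analyzer": "standard_ngram",
  "text": "fox"
}
```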

Configurable parameters

max_gram

(Optional, integer) Maximum length of characters in a gram. Defaults to 2.

min_gram

(Optional, integer) Minimum length of characters in a gram. Defaults to 1.

preserve_original

(Optional, Boolean) Emits original token when set to true. Defaults to false.

You can use the index.max_ngram_diff index-level setting to control the maximum allowed difference between the max_gram and min_gram values.
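To illustrate how these parameters interact, here is a plain-Python sketch (not the Lucene implementation; the max_ngram_diff check mirrors the index-level validation, and the preserve_original branch approximates Lucene's behavior of keeping tokens whose length falls outside the gram range):

```python
def char_ngrams(token, min_gram=1, max_gram=2,
                preserve_original=False, max_ngram_diff=1):
    """Sketch of the ngram filter's per-token behavior."""
    # Elasticsearch rejects the request when max_gram - min_gram
    # exceeds index.max_ngram_diff (default 1).
    if max_gram - min_gram > max_ngram_diff:
        raise ValueError("max_gram - min_gram exceeds index.max_ngram_diff")
    grams = [token[start:start + size]
             for start in range(len(token))
             for size in range(min_gram, max_gram + 1)
             if start + size <= len(token)]
    # preserve_original keeps the token itself even when its length
    # falls outside the [min_gram, max_gram] range.
    if preserve_original and token not in grams:
        grams.append(token)
    return grams

# min_gram=3 and max_gram=5 require index.max_ngram_diff >= 2:
print(char_ngrams("quick", 3, 5, max_ngram_diff=2))
# "ox" is too short for any 3-5 gram, so only preserve_original keeps it:
print(char_ngrams("ox", 3, 5, preserve_original=True, max_ngram_diff=2))
```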

Customize

To customize the ngram filter, duplicate it to create the basis for a new custom token filter. You can modify the filter using its configurable parameters.

For example, the following request creates a custom ngram filter that forms n-grams between 3 and 5 characters in length. Because this exceeds the default allowed difference of 1 between max_gram and min_gram, the request also increases the index.max_ngram_diff setting to 2.

```python
resp = client.indices.create(
    index="ngram_custom_example",
    settings={
        "index": {
            "max_ngram_diff": 2
        },
        "analysis": {
            "analyzer": {
                "default": {
                    "tokenizer": "whitespace",
                    "filter": [
                        "3_5_grams"
                    ]
                }
            },
            "filter": {
                "3_5_grams": {
                    "type": "ngram",
                    "min_gram": 3,
                    "max_gram": 5
                }
            }
        }
    },
)
print(resp)
```

```ruby
response = client.indices.create(
  index: 'ngram_custom_example',
  body: {
    settings: {
      index: {
        max_ngram_diff: 2
      },
      analysis: {
        analyzer: {
          default: {
            tokenizer: 'whitespace',
            filter: [
              '3_5_grams'
            ]
          }
        },
        filter: {
          "3_5_grams": {
            type: 'ngram',
            min_gram: 3,
            max_gram: 5
          }
        }
      }
    }
  }
)
puts response
```

```js
const response = await client.indices.create({
  index: "ngram_custom_example",
  settings: {
    index: {
      max_ngram_diff: 2,
    },
    analysis: {
      analyzer: {
        default: {
          tokenizer: "whitespace",
          filter: ["3_5_grams"],
        },
      },
      filter: {
        "3_5_grams": {
          type: "ngram",
          min_gram: 3,
          max_gram: 5,
        },
      },
    },
  },
});
console.log(response);
```

```console
PUT ngram_custom_example
{
  "settings": {
    "index": {
      "max_ngram_diff": 2
    },
    "analysis": {
      "analyzer": {
        "default": {
          "tokenizer": "whitespace",
          "filter": [ "3_5_grams" ]
        }
      },
      "filter": {
        "3_5_grams": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 5
        }
      }
    }
  }
}
```