Trim token filter

Removes leading and trailing whitespace from each token in a stream. While this can change the length of a token, the trim filter does not change a token’s offsets.

The trim filter uses Lucene’s TrimFilter.

Many commonly used tokenizers, such as the standard or whitespace tokenizer, remove whitespace by default. When using these tokenizers, you don’t need to add a separate trim filter.
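To see why this is the case, here is a local sketch (no Elasticsearch cluster required, and not the Lucene implementation) of the difference between the two tokenization styles: a whitespace-style tokenizer splits on whitespace and so never emits padded tokens, while a keyword-style tokenizer emits the raw input as a single token, whitespace included.

```python
# Sketch: why the keyword tokenizer needs a trim filter but the
# whitespace tokenizer does not. This simulates tokenizer behavior
# locally; it is an illustration, not the actual Lucene code.
text = " fox "

# A whitespace-style tokenizer splits on runs of whitespace, so the
# emitted tokens carry no leading or trailing spaces.
whitespace_tokens = text.split()
print(whitespace_tokens)  # ['fox']

# A keyword-style tokenizer emits the entire input as one token,
# whitespace and all -- hence the need for a separate trim filter.
keyword_tokens = [text]
print(keyword_tokens)  # [' fox ']
```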

Example

To see how the trim filter works, you first need to produce a token containing whitespace.

The following analyze API request uses the keyword tokenizer to produce a token for " fox ".

Python:

    resp = client.indices.analyze(
        tokenizer="keyword",
        text=" fox ",
    )
    print(resp)

Ruby:

    response = client.indices.analyze(
      body: {
        tokenizer: 'keyword',
        text: ' fox '
      }
    )
    puts response

JavaScript:

    const response = await client.indices.analyze({
      tokenizer: "keyword",
      text: " fox ",
    });
    console.log(response);

Console:

    GET _analyze
    {
      "tokenizer": "keyword",
      "text": " fox "
    }

The API returns the following response. Note that the " fox " token retains the original text's whitespace.

    {
      "tokens": [
        {
          "token": " fox ",
          "start_offset": 0,
          "end_offset": 5,
          "type": "word",
          "position": 0
        }
      ]
    }

To remove the whitespace, add the trim filter to the previous analyze API request.

Python:

    resp = client.indices.analyze(
        tokenizer="keyword",
        filter=[
            "trim"
        ],
        text=" fox ",
    )
    print(resp)

Ruby:

    response = client.indices.analyze(
      body: {
        tokenizer: 'keyword',
        filter: [
          'trim'
        ],
        text: ' fox '
      }
    )
    puts response

JavaScript:

    const response = await client.indices.analyze({
      tokenizer: "keyword",
      filter: ["trim"],
      text: " fox ",
    });
    console.log(response);

Console:

    GET _analyze
    {
      "tokenizer": "keyword",
      "filter": ["trim"],
      "text": " fox "
    }

The API returns the following response. The returned fox token does not include any leading or trailing whitespace. Note that although the token's length changed, the start_offset and end_offset remain the same.

    {
      "tokens": [
        {
          "token": "fox",
          "start_offset": 0,
          "end_offset": 5,
          "type": "word",
          "position": 0
        }
      ]
    }
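The filter's contract can be sketched in a few lines of plain Python (an illustration of the documented behavior, not the Lucene `TrimFilter` implementation): the token text is stripped, but the offsets still point at the original, untrimmed text.

```python
# Sketch of the trim filter's effect on a token stream: strip the
# token text, leave start_offset/end_offset untouched. Illustrative
# only -- the real filter is Lucene's TrimFilter.
def trim_filter(tokens):
    return [{**t, "token": t["token"].strip()} for t in tokens]

stream = [{"token": " fox ", "start_offset": 0, "end_offset": 5,
           "type": "word", "position": 0}]
trimmed = trim_filter(stream)
print(trimmed[0]["token"])       # fox
print(trimmed[0]["end_offset"])  # 5 -- offsets are unchanged
```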

Add to an analyzer

The following create index API request uses the trim filter to configure a new custom analyzer.

Python:

    resp = client.indices.create(
        index="trim_example",
        settings={
            "analysis": {
                "analyzer": {
                    "keyword_trim": {
                        "tokenizer": "keyword",
                        "filter": [
                            "trim"
                        ]
                    }
                }
            }
        },
    )
    print(resp)

Ruby:

    response = client.indices.create(
      index: 'trim_example',
      body: {
        settings: {
          analysis: {
            analyzer: {
              keyword_trim: {
                tokenizer: 'keyword',
                filter: [
                  'trim'
                ]
              }
            }
          }
        }
      }
    )
    puts response

JavaScript:

    const response = await client.indices.create({
      index: "trim_example",
      settings: {
        analysis: {
          analyzer: {
            keyword_trim: {
              tokenizer: "keyword",
              filter: ["trim"],
            },
          },
        },
      },
    });
    console.log(response);

Console:

    PUT trim_example
    {
      "settings": {
        "analysis": {
          "analyzer": {
            "keyword_trim": {
              "tokenizer": "keyword",
              "filter": [ "trim" ]
            }
          }
        }
      }
    }
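Once the index exists, you can check the custom analyzer with the analyze API against the index (a usage sketch; the `trim_example` index and `keyword_trim` analyzer names come from the create index request above):

```console
GET trim_example/_analyze
{
  "analyzer": "keyword_trim",
  "text": " fox "
}
```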