Pattern tokenizer

The pattern tokenizer uses a regular expression to either split text into terms whenever it matches a word separator, or to capture matching text as terms.

The default pattern is \W+, which splits text whenever it encounters non-word characters.
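
A rough local approximation with Python's re module (re shares the \W+ syntax, though the real tokenizer runs Java regular expressions and silently drops empty tokens):

  import re

  # Approximate the default pattern tokenizer: split on runs of non-word
  # characters, then drop the empty strings the tokenizer would discard.
  text = "The foo_bar_size's default is 5."
  tokens = [t for t in re.split(r"\W+", text) if t]
  print(tokens)  # ['The', 'foo_bar_size', 's', 'default', 'is', '5']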

Beware of Pathological Regular Expressions

The pattern tokenizer uses Java Regular Expressions.

A badly written regular expression could run very slowly or even throw a StackOverflowError and cause the node it is running on to exit suddenly.

Read more about pathological regular expressions and how to avoid them.
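
To see why this matters, here is a sketch using Python's re module (an assumption: Python's engine, like Java's, uses backtracking and degrades the same way on this pattern). A nested quantifier such as (a+)+ forces the engine to try exponentially many ways of splitting the input before a match can fail, so each extra character roughly doubles the running time:

  import re
  import time

  # Catastrophic backtracking: (a+)+ can partition a run of "a"s in
  # exponentially many ways, and every partition is tried before failure.
  pathological = re.compile(r"(a+)+b")

  for n in (16, 20, 24):
      start = time.perf_counter()
      pathological.match("a" * n + "c")  # no "b", so the match must fail
      print(f"n={n}: {time.perf_counter() - start:.2f}s")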

Example output

Python:

  resp = client.indices.analyze(
      tokenizer="pattern",
      text="The foo_bar_size's default is 5.",
  )
  print(resp)

Ruby:

  response = client.indices.analyze(
    body: {
      tokenizer: 'pattern',
      text: "The foo_bar_size's default is 5."
    }
  )
  puts response

JavaScript:

  const response = await client.indices.analyze({
    tokenizer: "pattern",
    text: "The foo_bar_size's default is 5.",
  });
  console.log(response);

Console:

  POST _analyze
  {
    "tokenizer": "pattern",
    "text": "The foo_bar_size's default is 5."
  }

The above sentence would produce the following terms:

  [ The, foo_bar_size, s, default, is, 5 ]

Configuration

The pattern tokenizer accepts the following parameters:

pattern

A Java regular expression, defaults to \W+.

flags

Java regular expression flags. Flags should be pipe-separated, e.g. "CASE_INSENSITIVE|COMMENTS". A short example follows this parameter list.

group

Which capture group to extract as tokens. Defaults to -1 (split).
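
As a sketch of the flags parameter (the index name my-index-000002 and the analyzer setup are illustrative, mirroring the examples below), the following tokenizer splits on the literal word "and" regardless of case:

  # Illustrative sketch: split on the word "and", ignoring case.
  resp = client.indices.create(
      index="my-index-000002",
      settings={
          "analysis": {
              "analyzer": {
                  "my_analyzer": {
                      "tokenizer": "my_tokenizer"
                  }
              },
              "tokenizer": {
                  "my_tokenizer": {
                      "type": "pattern",
                      "pattern": "\\s+and\\s+",
                      "flags": "CASE_INSENSITIVE"
                  }
              }
          }
      },
  )
  print(resp)

  resp1 = client.indices.analyze(
      index="my-index-000002",
      analyzer="my_analyzer",
      text="salt AND pepper",
  )
  print(resp1)  # tokens: [ salt, pepper ]

CASE_INSENSITIVE is one of the java.util.regex.Pattern flag names; multiple flags are joined with |, as noted above.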

Example configuration

In this example, we configure the pattern tokenizer to break text into tokens when it encounters commas:

Python:

  resp = client.indices.create(
      index="my-index-000001",
      settings={
          "analysis": {
              "analyzer": {
                  "my_analyzer": {
                      "tokenizer": "my_tokenizer"
                  }
              },
              "tokenizer": {
                  "my_tokenizer": {
                      "type": "pattern",
                      "pattern": ","
                  }
              }
          }
      },
  )
  print(resp)

  resp1 = client.indices.analyze(
      index="my-index-000001",
      analyzer="my_analyzer",
      text="comma,separated,values",
  )
  print(resp1)

Ruby:

  response = client.indices.create(
    index: 'my-index-000001',
    body: {
      settings: {
        analysis: {
          analyzer: {
            my_analyzer: {
              tokenizer: 'my_tokenizer'
            }
          },
          tokenizer: {
            my_tokenizer: {
              type: 'pattern',
              pattern: ','
            }
          }
        }
      }
    }
  )
  puts response

  response = client.indices.analyze(
    index: 'my-index-000001',
    body: {
      analyzer: 'my_analyzer',
      text: 'comma,separated,values'
    }
  )
  puts response

JavaScript:

  const response = await client.indices.create({
    index: "my-index-000001",
    settings: {
      analysis: {
        analyzer: {
          my_analyzer: {
            tokenizer: "my_tokenizer",
          },
        },
        tokenizer: {
          my_tokenizer: {
            type: "pattern",
            pattern: ",",
          },
        },
      },
    },
  });
  console.log(response);

  const response1 = await client.indices.analyze({
    index: "my-index-000001",
    analyzer: "my_analyzer",
    text: "comma,separated,values",
  });
  console.log(response1);

Console:

  PUT my-index-000001
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "tokenizer": "my_tokenizer"
          }
        },
        "tokenizer": {
          "my_tokenizer": {
            "type": "pattern",
            "pattern": ","
          }
        }
      }
    }
  }

  POST my-index-000001/_analyze
  {
    "analyzer": "my_analyzer",
    "text": "comma,separated,values"
  }

The above example produces the following terms:

  [ comma, separated, values ]

In the next example, we configure the pattern tokenizer to capture values enclosed in double quotes (ignoring embedded escaped quotes \"). The regex itself looks like this:

  1. "((?:\\"|[^"]|\\")*)"

And reads as follows:

  • A literal "
  • Start capturing:
    • A literal \" OR any character except "
    • Repeat until no more characters match
  • A literal closing "

When the pattern is specified in JSON, the " and \ characters need to be escaped, so the pattern ends up looking like:

  1. \"((?:\\\\\"|[^\"]|\\\\\")+)\"

Python:

  resp = client.indices.create(
      index="my-index-000001",
      settings={
          "analysis": {
              "analyzer": {
                  "my_analyzer": {
                      "tokenizer": "my_tokenizer"
                  }
              },
              "tokenizer": {
                  "my_tokenizer": {
                      "type": "pattern",
                      "pattern": "\"((?:\\\\\"|[^\"]|\\\\\")+)\"",
                      "group": 1
                  }
              }
          }
      },
  )
  print(resp)

  resp1 = client.indices.analyze(
      index="my-index-000001",
      analyzer="my_analyzer",
      text="\"value\", \"value with embedded \\\" quote\"",
  )
  print(resp1)

Ruby:

  response = client.indices.create(
    index: 'my-index-000001',
    body: {
      settings: {
        analysis: {
          analyzer: {
            my_analyzer: {
              tokenizer: 'my_tokenizer'
            }
          },
          tokenizer: {
            my_tokenizer: {
              type: 'pattern',
              pattern: '"((?:\\\"|[^"]|\\\")+)"',
              group: 1
            }
          }
        }
      }
    }
  )
  puts response

  response = client.indices.analyze(
    index: 'my-index-000001',
    body: {
      analyzer: 'my_analyzer',
      text: '"value", "value with embedded \" quote"'
    }
  )
  puts response

JavaScript:

  const response = await client.indices.create({
    index: "my-index-000001",
    settings: {
      analysis: {
        analyzer: {
          my_analyzer: {
            tokenizer: "my_tokenizer",
          },
        },
        tokenizer: {
          my_tokenizer: {
            type: "pattern",
            pattern: '"((?:\\\\"|[^"]|\\\\")+)"',
            group: 1,
          },
        },
      },
    },
  });
  console.log(response);

  const response1 = await client.indices.analyze({
    index: "my-index-000001",
    analyzer: "my_analyzer",
    text: '"value", "value with embedded \\" quote"',
  });
  console.log(response1);

Console:

  PUT my-index-000001
  {
    "settings": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "tokenizer": "my_tokenizer"
          }
        },
        "tokenizer": {
          "my_tokenizer": {
            "type": "pattern",
            "pattern": "\"((?:\\\\\"|[^\"]|\\\\\")+)\"",
            "group": 1
          }
        }
      }
    }
  }

  POST my-index-000001/_analyze
  {
    "analyzer": "my_analyzer",
    "text": "\"value\", \"value with embedded \\\" quote\""
  }

The above example produces the following two terms:

  [ value, value with embedded \" quote ]