ML inference search request processor

Introduced 2.16

The ml_inference search request processor is used to invoke registered machine learning (ML) models in order to rewrite queries using the model output.

PREREQUISITE
Before using the ml_inference search request processor, you must have either a local ML model hosted on your OpenSearch cluster or an externally hosted model connected to your OpenSearch cluster through the ML Commons plugin. For more information about local models, see Using ML models within OpenSearch. For more information about externally hosted models, see Connecting to externally hosted models.

Syntax

The following is the syntax for the ml_inference search request processor:

{
  "ml_inference": {
    "model_id": "<model_id>",
    "function_name": "<function_name>",
    "full_response_path": "<full_response_path>",
    "query_template": "<query_template>",
    "model_config": {
      "<model_config_field>": "<config_value>"
    },
    "model_input": "<model_input>",
    "input_map": [
      {
        "<model_input_field>": "<query_input_field>"
      }
    ],
    "output_map": [
      {
        "<query_output_field>": "<model_output_field>"
      }
    ]
  }
}


Configuration parameters

The following table lists the required and optional parameters for the ml_inference search request processor.

| Parameter | Data type | Required/Optional | Description |
|:---|:---|:---|:---|
| model_id | String | Required | The ID of the ML model used by the processor. |
| query_template | String | Optional | A query string template used to construct a new query containing a new_document_field. Often used when rewriting a search query to a new query type. |
| function_name | String | Optional for externally hosted models. Required for local models. | The function name of the ML model configured in the processor. For local models, valid values are sparse_encoding, sparse_tokenize, text_embedding, and text_similarity. For externally hosted models, the only valid value is remote. Default is remote. |
| model_config | Object | Optional | Custom configuration options for the ML model. For more information, see The model_config object. |
| model_input | String | Optional for externally hosted models. Required for local models. | A template that defines the input field format expected by the model. Each local model type might use a different set of inputs. For externally hosted models, the default is "{ \"parameters\": ${ml_inference.parameters} }". |
| input_map | Array | Required | An array specifying how to map query string fields to the model input fields. Each element of the array is a map in the "<model_input_field>": "<query_input_field>" format and corresponds to one model invocation of a document field. If no input mapping is specified for an externally hosted model, then all document fields are passed to the model directly as input. The input_map size indicates the number of times the model is invoked (the number of Predict API requests). |
| <model_input_field> | String | Required | The model input field name. |
| <query_input_field> | String | Required | The name or JSON path of the query field used as the model input. |
| output_map | Array | Required | An array specifying how to map the model output fields to new fields in the query string. Each element of the array is a map in the "<query_output_field>": "<model_output_field>" format. |
| <query_output_field> | String | Required | The name of the query field in which the model's output (specified by <model_output_field>) is stored. |
| <model_output_field> | String | Required | The name or JSON path of the field in the model output to be stored in the <query_output_field>. |
| full_response_path | Boolean | Optional | Set this parameter to true if the model_output_field contains a full JSON path to the field instead of the field name. The model output will then be fully parsed to get the value of the field. Default is true for local models and false for externally hosted models. |
| ignore_missing | Boolean | Optional | If true and any of the input fields defined in the input_map or output_map are missing, then the missing fields are ignored. Otherwise, a missing field causes a failure. Default is false. |
| ignore_failure | Boolean | Optional | Specifies whether the processor continues execution even if it encounters an error. If true, then any failure is ignored and the search continues. If false, then any failure causes the search to be canceled. Default is false. |
| max_prediction_tasks | Integer | Optional | The maximum number of concurrent model invocations that can run during query search. Default is 10. |
| description | String | Optional | A brief description of the processor. |
| tag | String | Optional | An identifier tag for the processor. Useful for debugging to distinguish between processors of the same type. |

The input_map and output_map mappings support standard JSON path notation for specifying complex data structures.
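For example, the query side of a mapping can address a nested value in the query body using dot notation, and the model side can address a nested value in the model response using a full JSON path (which requires setting full_response_path to true). The review_text field and the output path in the following sketch are hypothetical and illustrate the notation only:

"input_map": [
  {
    "inputs": "query.term.review_text.value"
  }
],
"output_map": [
  {
    "query.term.review_text.value": "$.inference_results.*.output.*.dataAsMap.label"
  }
]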

Using the processor

Follow these steps to use the processor in a pipeline. You must provide a model ID, input_map, and output_map when creating the processor. Before testing a pipeline using the processor, make sure that the model is successfully deployed. You can check the model state using the Get Model API.
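For example, the following Get Model API request returns the model details; the model is ready for use when the returned model_state is DEPLOYED (the model ID is a placeholder):

GET /_plugins/_ml/models/<your model id>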

For local models, you must provide a model_input field that specifies the model input format. Add any input fields in model_config to model_input.

For externally hosted models, the model_input field is optional, and its default value is "{ \"parameters\": ${ml_inference.parameters} }".

Setup

Create an index named my_index and index two documents:
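The requests below index the documents but do not show the index mapping. Because the local model example later in this section runs a k-NN query against the passage_embedding field, the index must be k-NN enabled with that field mapped as a knn_vector. The following is a minimal sketch, assuming a 768-dimensional embedding (the dimension returned by the all-distilroberta-v1 model used later); adjust the settings to match your model:

PUT /my_index
{
  "settings": {
    "index.knn": true
  },
  "mappings": {
    "properties": {
      "passage_text": { "type": "text" },
      "passage_language": { "type": "keyword" },
      "label": { "type": "keyword" },
      "passage_embedding": {
        "type": "knn_vector",
        "dimension": 768
      }
    }
  }
}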

POST /my_index/_doc/1
{
  "passage_text": "I am excited",
  "passage_language": "en",
  "label": "POSITIVE",
  "passage_embedding": [
    2.3886719,
    0.032714844,
    -0.22229004
  ...]
}


POST /my_index/_doc/2
{
  "passage_text": "I am sad",
  "passage_language": "en",
  "label": "NEGATIVE",
  "passage_embedding": [
    1.7773438,
    0.4309082,
    1.8857422,
    0.95996094,
  ...]
}


When you run a term query on the created index without a search pipeline, the query searches for documents that contain the exact term specified in the query. The following query does not return any results because the query text does not match any of the documents in the index:

GET /my_index/_search
{
  "query": {
    "term": {
      "passage_text": {
        "value": "happy moments",
        "boost": 1
      }
    }
  }
}

By using a model, the search pipeline can dynamically rewrite the term value to enhance or alter the search results based on the model inference. This means the model takes an initial input from the search query, processes it, and then updates the query term to reflect the model inference, potentially improving the relevance of the search results.

Example: Externally hosted model

The following example configures an ml_inference processor with an externally hosted model.

Step 1: Create a pipeline

This example demonstrates how to create a search pipeline for an externally hosted sentiment analysis model that rewrites the term query value. The model requires an inputs field and produces results in a label field. Because the function_name is not specified, it defaults to remote, indicating an externally hosted model.

The term query value is rewritten based on the model’s output. The ml_inference processor in the search request needs an input_map to retrieve the query field value for the model input and an output_map to assign the model output to the query string.

In this example, an ml_inference search request processor is used for the following term query:

{
  "query": {
    "term": {
      "label": {
        "value": "happy moments",
        "boost": 1
      }
    }
  }
}

The following request creates a search pipeline that rewrites the preceding term query:

PUT /_search/pipeline/ml_inference_pipeline
{
  "description": "Rewrite the term query value using the sentiment analysis model output",
  "processors": [
    {
      "ml_inference": {
        "model_id": "<your model id>",
        "input_map": [
          {
            "inputs": "query.term.label.value"
          }
        ],
        "output_map": [
          {
            "query.term.label.value": "label"
          }
        ]
      }
    }
  ]
}


When making a Predict API request to an externally hosted model, all necessary fields and parameters are usually contained within a parameters object:

POST /_plugins/_ml/models/cleMb4kBJ1eYAeTMFFg4/_predict
{
  "parameters": {
    "inputs": [
      {
        ...
      }
    ]
  }
}

Thus, to use an externally hosted sentiment analysis model, send a Predict API request in the following format:

POST /_plugins/_ml/models/cywgD5EB6KAJXDLxyDp1/_predict
{
  "parameters": {
    "inputs": "happy moments"
  }
}


The model processes the input and generates a prediction based on the sentiment of the input text. In this case, the sentiment is positive:

{
  "inference_results": [
    {
      "output": [
        {
          "name": "response",
          "dataAsMap": {
            "label": "POSITIVE",
            "score": "0.948"
          }
        }
      ],
      "status_code": 200
    }
  ]
}

When specifying the input_map for an externally hosted model, you can directly reference the inputs field instead of providing its dot path parameters.inputs:

  1. "input_map": [
  2. {
  3. "inputs": "query.term.label.value"
  4. }
  5. ]

Step 2: Run the pipeline

Once you have created a search pipeline, you can run the same term query with the search pipeline:

GET /my_index/_search?search_pipeline=ml_inference_pipeline
{
  "query": {
    "term": {
      "label": {
        "value": "happy moments",
        "boost": 1
      }
    }
  }
}


The query term value is rewritten based on the model’s output. The model determines that the sentiment of the query term is positive, so the rewritten query appears as follows:

{
  "query": {
    "term": {
      "label": {
        "value": "POSITIVE",
        "boost": 1
      }
    }
  }
}

The response includes the document whose label field has the value POSITIVE:

{
  "took": 288,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 0.00009405752,
    "hits": [
      {
        "_index": "my_index",
        "_id": "1",
        "_score": 0.00009405752,
        "_source": {
          "passage_text": "I am excited",
          "passage_language": "en",
          "label": "POSITIVE"
        }
      }
    ]
  }
}
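Instead of passing the pipeline name as a query parameter on every request, you can set the pipeline as the index's default search pipeline so that it is applied automatically:

PUT /my_index/_settings
{
  "index.search.default_pipeline": "ml_inference_pipeline"
}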

Example: Local model

The following example shows you how to configure an ml_inference processor with a local model to rewrite a term query into a k-NN query.

Step 1: Create a pipeline

The following example shows you how to create a search pipeline for the huggingface/sentence-transformers/all-distilroberta-v1 local model. The model is a pretrained sentence transformer model hosted in your OpenSearch cluster.

If you invoke the model using the Predict API, then the request appears as follows:

POST /_plugins/_ml/_predict/text_embedding/cleMb4kBJ1eYAeTMFFg4
{
  "text_docs": [
    "today is sunny"
  ],
  "return_number": true,
  "target_response": [
    "sentence_embedding"
  ]
}

Using this schema, specify the model_input as follows:

  1. "model_input": "{ \"text_docs\": ${input_map.text_docs}, \"return_number\": ${model_config.return_number}, \"target_response\": ${model_config.target_response} }"

In the input_map, map the query.term.passage_embedding.value query field to the text_docs field expected by the model:

  1. "input_map": [
  2. {
  3. "text_docs": "query.term.passage_embedding.value"
  4. }
  5. ]

Because the model output field will be specified as a full JSON path rather than a field name (see the output_map later in this example), you need to set full_response_path to true. The full model response is then parsed in order to obtain the value of that field:

  1. "full_response_path": true

The text in the query.term.passage_embedding.value field will be used to generate embeddings:

{
  "text_docs": "happy passage"
}

The Predict API request returns the following response:

{
  "inference_results": [
    {
      "output": [
        {
          "name": "sentence_embedding",
          "data_type": "FLOAT32",
          "shape": [
            768
          ],
          "data": [
            0.25517133,
            -0.28009856,
            0.48519906,
            ...
          ]
        }
      ]
    }
  ]
}

The model generates embeddings in the $.inference_results.*.output.*.data field. The output_map maps this field to the modelPredictionOutcome variable, which is then referenced in the query_template:

  1. "output_map": [
  2. {
  3. "modelPredictionOutcome": "$.inference_results.*.output.*.data"
  4. }
  5. ]

To configure an ml_inference search request processor with a local model, specify the function_name explicitly. In this example, the function_name is text_embedding. For information about valid function_name values, see Configuration parameters.

The following is the final configuration of the ml_inference processor with the local model:

PUT /_search/pipeline/ml_inference_pipeline_local
{
  "description": "searches reviews and generates embeddings",
  "processors": [
    {
      "ml_inference": {
        "function_name": "text_embedding",
        "full_response_path": true,
        "model_id": "<your model id>",
        "model_config": {
          "return_number": true,
          "target_response": [
            "sentence_embedding"
          ]
        },
        "model_input": "{ \"text_docs\": ${input_map.text_docs}, \"return_number\": ${model_config.return_number}, \"target_response\": ${model_config.target_response} }",
        "query_template": """{
          "size": 2,
          "query": {
            "knn": {
              "passage_embedding": {
                "vector": ${modelPredictionOutcome},
                "k": 5
              }
            }
          }
        }""",
        "input_map": [
          {
            "text_docs": "query.term.passage_embedding.value"
          }
        ],
        "output_map": [
          {
            "modelPredictionOutcome": "$.inference_results.*.output.*.data"
          }
        ],
        "ignore_missing": true,
        "ignore_failure": true
      }
    }
  ]
}


Step 2: Run the pipeline

Run the following query, providing the pipeline name in the request:

GET /my_index/_search?search_pipeline=ml_inference_pipeline_local
{
  "query": {
    "term": {
      "passage_embedding": {
        "value": "happy passage"
      }
    }
  }
}


The response confirms that the processor ran a k-NN query, which returned document 1 with a higher score:

{
  "took": 288,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 2,
      "relation": "eq"
    },
    "max_score": 0.00009405752,
    "hits": [
      {
        "_index": "my_index",
        "_id": "1",
        "_score": 0.00009405752,
        "_source": {
          "passage_text": "I am excited",
          "passage_language": "en",
          "label": "POSITIVE",
          "passage_embedding": [
            2.3886719,
            0.032714844,
            -0.22229004
          ...]
        }
      },
      {
        "_index": "my_index",
        "_id": "2",
        "_score": 0.00001405052,
        "_source": {
          "passage_text": "I am sad",
          "passage_language": "en",
          "label": "NEGATIVE",
          "passage_embedding": [
            1.7773438,
            0.4309082,
            1.8857422,
            0.95996094,
            ...
          ]
        }
      }
    ]
  }
}