OpenSearch-provided pretrained models

Introduced 2.9

OpenSearch provides a variety of open-source pretrained models that can assist with a range of machine learning (ML) search and analytics use cases. You can upload any supported model to the OpenSearch cluster and use it locally.

Supported pretrained models

OpenSearch supports the following models, categorized by type. Text embedding models are sourced from Hugging Face. Sparse encoding models are trained by OpenSearch. Although models of the same type have similar use cases, each model differs in size and will perform differently depending on your cluster setup. For a performance comparison of some pretrained models, see the SBERT documentation.

Running local models on the CentOS 7 operating system is not supported. Moreover, not all local models can run on all hardware and operating systems.

Sentence transformers

Sentence transformer models map sentences and paragraphs to a dense vector space. The number of dimensions depends on the model. You can use these models for use cases such as clustering or semantic search.

The following table provides a list of sentence transformer models and artifact links you can use to download them. Note that you must prefix the model name with huggingface/, as shown in the Model name column.

| Model name | Version | Vector dimensions | Auto-truncation | TorchScript artifact | ONNX artifact |
| :--- | :--- | :--- | :--- | :--- | :--- |
| huggingface/sentence-transformers/all-distilroberta-v1 | 1.0.1 | 768-dimensional dense vector space. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/all-MiniLM-L6-v2 | 1.0.1 | 384-dimensional dense vector space. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/all-MiniLM-L12-v2 | 1.0.1 | 384-dimensional dense vector space. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/all-mpnet-base-v2 | 1.0.1 | 768-dimensional dense vector space. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/msmarco-distilbert-base-tas-b | 1.0.2 | 768-dimensional dense vector space. Optimized for semantic search. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/multi-qa-MiniLM-L6-cos-v1 | 1.0.1 | 384-dimensional dense vector space. Designed for semantic search and trained on 215 million question/answer pairs. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/multi-qa-mpnet-base-dot-v1 | 1.0.1 | 768-dimensional dense vector space. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/paraphrase-MiniLM-L3-v2 | 1.0.1 | 384-dimensional dense vector space. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | 1.0.1 | 384-dimensional dense vector space. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/paraphrase-mpnet-base-v2 | 1.0.0 | 768-dimensional dense vector space. | Yes | model_url, config_url | model_url, config_url |
| huggingface/sentence-transformers/distiluse-base-multilingual-cased-v1 | 1.0.1 | 512-dimensional dense vector space. | Yes | model_url, config_url | Not available |

Sparse encoding models

Introduced 2.11

Sparse encoding models transform text into a sparse vector and convert the vector to a list of <token: weight> pairs representing the text entry and its corresponding weight in the sparse vector. You can use these models for use cases such as clustering or sparse neural search.

We recommend the following combinations for optimal performance:

  • Use the amazon/neural-sparse/opensearch-neural-sparse-encoding-v2-distill model during both ingestion and search.
  • Use the amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v2-distill model during ingestion and the amazon/neural-sparse/opensearch-neural-sparse-tokenizer-v1 tokenizer during search.

For more information about the preceding options for running neural sparse search, see Generating sparse vector embeddings within OpenSearch.
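For example, to follow the second recommendation, you can register the doc-only encoder and the tokenizer as two local models. The following requests are a minimal sketch that uses the register API described in Step 2 below, with the model names and versions taken from the table that follows; the optional model_group_id is omitted:

```json
POST /_plugins/_ml/models/_register
{
  "name": "amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v2-distill",
  "version": "1.0.0",
  "model_format": "TORCH_SCRIPT"
}

POST /_plugins/_ml/models/_register
{
  "name": "amazon/neural-sparse/opensearch-neural-sparse-tokenizer-v1",
  "version": "1.0.1",
  "model_format": "TORCH_SCRIPT"
}
```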

The following table provides a list of sparse encoding models and artifact links you can use to download them.

| Model name | Version | Auto-truncation | TorchScript artifact | Description |
| :--- | :--- | :--- | :--- | :--- |
| amazon/neural-sparse/opensearch-neural-sparse-encoding-v1 | 1.0.1 | Yes | model_url, config_url | A neural sparse encoding model. The model transforms text into a sparse vector, identifies the indices of non-zero elements in the vector, and then converts the vector into <entry, weight> pairs, where each entry corresponds to a non-zero element index. To experiment with this model using transformers and the PyTorch API, see the Hugging Face documentation. |
| amazon/neural-sparse/opensearch-neural-sparse-encoding-v2-distill | 1.0.0 | Yes | model_url, config_url | A neural sparse encoding model. The model transforms text into a sparse vector, identifies the indices of non-zero elements in the vector, and then converts the vector into <entry, weight> pairs, where each entry corresponds to a non-zero element index. To experiment with this model using transformers and the PyTorch API, see the Hugging Face documentation. |
| amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v1 | 1.0.1 | Yes | model_url, config_url | A neural sparse encoding model. The model transforms text into a sparse vector, identifies the indices of non-zero elements in the vector, and then converts the vector into <entry, weight> pairs, where each entry corresponds to a non-zero element index. To experiment with this model using transformers and the PyTorch API, see the Hugging Face documentation. |
| amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v2-distill | 1.0.0 | Yes | model_url, config_url | A neural sparse encoding model. The model transforms text into a sparse vector, identifies the indices of non-zero elements in the vector, and then converts the vector into <entry, weight> pairs, where each entry corresponds to a non-zero element index. To experiment with this model using transformers and the PyTorch API, see the Hugging Face documentation. |
| amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v2-mini | 1.0.0 | Yes | model_url, config_url | A neural sparse encoding model. The model transforms text into a sparse vector, identifies the indices of non-zero elements in the vector, and then converts the vector into <entry, weight> pairs, where each entry corresponds to a non-zero element index. To experiment with this model using transformers and the PyTorch API, see the Hugging Face documentation. |
| amazon/neural-sparse/opensearch-neural-sparse-tokenizer-v1 | 1.0.1 | Yes | model_url, config_url | A neural sparse tokenizer. The tokenizer splits text into tokens and assigns each token a predefined weight, which is the token's inverse document frequency (IDF). If the IDF file is not provided, the weight defaults to 1. For more information, see Preparing a model. |

Cross-encoder models

Introduced 2.12

Cross-encoder models support query reranking.

The following table provides a list of cross-encoder models and artifact links you can use to download them. Note that you must prefix the model name with huggingface/cross-encoders/, as shown in the Model name column.

| Model name | Version | TorchScript artifact | ONNX artifact |
| :--- | :--- | :--- | :--- |
| huggingface/cross-encoders/ms-marco-MiniLM-L-6-v2 | 1.0.2 | model_url, config_url | model_url, config_url |
| huggingface/cross-encoders/ms-marco-MiniLM-L-12-v2 | 1.0.2 | model_url, config_url | model_url, config_url |
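
For example, a register request for a cross-encoder follows the same pattern as in Step 2 below. This is a sketch that uses the name and version from the preceding table; the optional model_group_id is omitted:

```json
POST /_plugins/_ml/models/_register
{
  "name": "huggingface/cross-encoders/ms-marco-MiniLM-L-6-v2",
  "version": "1.0.2",
  "model_format": "TORCH_SCRIPT"
}
```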

Prerequisites

On clusters with dedicated ML nodes, specify "only_run_on_ml_node": "true" for improved performance. For more information, see ML Commons cluster settings.
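For example, on a cluster with dedicated ML nodes, that setting can be applied with the same cluster settings API used in the next example:

```json
PUT _cluster/settings
{
  "persistent": {
    "plugins.ml_commons.only_run_on_ml_node": "true"
  }
}
```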

This example uses a simple setup with no dedicated ML nodes and allows running a model on a non-ML node. To ensure that this basic local setup works, specify the following cluster settings:

```json
PUT _cluster/settings
{
  "persistent": {
    "plugins.ml_commons.only_run_on_ml_node": "false",
    "plugins.ml_commons.model_access_control_enabled": "true",
    "plugins.ml_commons.native_memory_threshold": "99"
  }
}
```


Step 1: Register a model group

To register a model, you have the following options:

  • You can use model_group_id to register a model version to an existing model group.
  • If you do not use model_group_id, ML Commons creates a model with a new model group.

To register a model group, send the following request:

```json
POST /_plugins/_ml/model_groups/_register
{
  "name": "local_model_group",
  "description": "A model group for local models"
}
```


The response contains the model group ID that you’ll use to register a model to this model group:

```json
{
  "model_group_id": "wlcnb4kBJ1eYAeTMHlV6",
  "status": "CREATED"
}
```

To learn more about model groups, see Model access control.
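
Because the prerequisite settings enable model access control, you can also restrict a model group to specific users. The following request is a sketch that assumes the access_mode and backend_roles parameters of the model group register API; the backend role name is illustrative:

```json
POST /_plugins/_ml/model_groups/_register
{
  "name": "restricted_model_group",
  "description": "A model group restricted to one backend role",
  "access_mode": "restricted",
  "backend_roles": ["my_backend_role"]
}
```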

Step 2: Register a local OpenSearch-provided model

To register an OpenSearch-provided model to the model group created in step 1, provide the model group ID from step 1 in the following request.

Because pretrained models originate from the ML Commons model repository, you only need to provide the name, version, model_group_id, and model_format in the register API request:

```json
POST /_plugins/_ml/models/_register
{
  "name": "huggingface/sentence-transformers/msmarco-distilbert-base-tas-b",
  "version": "1.0.2",
  "model_group_id": "Z1eQf4oB5Vm0Tdw8EIP2",
  "model_format": "TORCH_SCRIPT"
}
```


OpenSearch returns the task ID of the register operation:

```json
{
  "task_id": "cVeMb4kBJ1eYAeTMFFgj",
  "status": "CREATED"
}
```

To check the status of the operation, provide the task ID to the Tasks API:

```json
GET /_plugins/_ml/tasks/cVeMb4kBJ1eYAeTMFFgj
```


When the operation is complete, the state changes to COMPLETED:

```json
{
  "model_id": "cleMb4kBJ1eYAeTMFFg4",
  "task_type": "REGISTER_MODEL",
  "function_name": "TEXT_EMBEDDING",
  "state": "COMPLETED",
  "worker_node": [
    "XPcXLV7RQoi5m8NI_jEOVQ"
  ],
  "create_time": 1689793598499,
  "last_update_time": 1689793598530,
  "is_async": false
}
```

Take note of the returned model_id because you’ll need it to deploy the model.

Step 3: Deploy the model

The deploy operation reads the model's chunks from the model index and then creates an instance of the model to load into memory. The bigger the model, the more chunks it is split into and the longer it takes to load into memory.

To deploy the registered model, provide its model ID from step 2 in the following request:

```json
POST /_plugins/_ml/models/cleMb4kBJ1eYAeTMFFg4/_deploy
```


The response contains the task ID that you can use to check the status of the deploy operation:

```json
{
  "task_id": "vVePb4kBJ1eYAeTM7ljG",
  "status": "CREATED"
}
```

As in the previous step, check the status of the operation by calling the Tasks API:

```json
GET /_plugins/_ml/tasks/vVePb4kBJ1eYAeTM7ljG
```


When the operation is complete, the state changes to COMPLETED:

```json
{
  "model_id": "cleMb4kBJ1eYAeTMFFg4",
  "task_type": "DEPLOY_MODEL",
  "function_name": "TEXT_EMBEDDING",
  "state": "COMPLETED",
  "worker_node": [
    "n-72khvBTBi3bnIIR8FTTw"
  ],
  "create_time": 1689793851077,
  "last_update_time": 1689793851101,
  "is_async": true
}
```

If a cluster or node is restarted, then you need to redeploy the model. To learn how to set up automatic redeployment, see Enable auto redeploy.
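
Auto redeploy is controlled through the ML Commons cluster settings. The following is a sketch, assuming the plugins.ml_commons.model_auto_redeploy.enable setting covered in that documentation:

```json
PUT _cluster/settings
{
  "persistent": {
    "plugins.ml_commons.model_auto_redeploy.enable": true
  }
}
```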

Step 4 (Optional): Test the model

Use the Predict API to test the model.

Text embedding model

For a text embedding model, send the following request:

```json
POST /_plugins/_ml/_predict/text_embedding/cleMb4kBJ1eYAeTMFFg4
{
  "text_docs": ["today is sunny"],
  "return_number": true,
  "target_response": ["sentence_embedding"]
}
```


The response contains text embeddings for the provided sentence:

```json
{
  "inference_results" : [
    {
      "output" : [
        {
          "name" : "sentence_embedding",
          "data_type" : "FLOAT32",
          "shape" : [
            768
          ],
          "data" : [
            0.25517133,
            -0.28009856,
            0.48519906,
            ...
          ]
        }
      ]
    }
  ]
}
```

Sparse encoding model

For a sparse encoding model, send the following request:

```json
POST /_plugins/_ml/_predict/sparse_encoding/cleMb4kBJ1eYAeTMFFg4
{
  "text_docs": ["today is sunny"]
}
```


The response contains the tokens and weights:

```json
{
  "inference_results": [
    {
      "output": [
        {
          "name": "output",
          "dataAsMap": {
            "response": [
              {
                "saturday": 0.48336542,
                "week": 0.1034762,
                "mood": 0.09698499,
                "sunshine": 0.5738209,
                "bright": 0.1756877,
                ...
              }
            ]
          }
        }
      ]
    }
  ]
}
```

Cross-encoder model

For a cross-encoder model, send the following request:

```json
POST _plugins/_ml/models/<model_id>/_predict
{
  "query_text": "today is sunny",
  "text_docs": [
    "how are you",
    "today is sunny",
    "today is july fifth",
    "it is winter"
  ]
}
```


The model calculates the similarity score between query_text and each document in text_docs and returns a list of scores, one per document, in the order in which the documents were provided in text_docs:

```json
{
  "inference_results": [
    {
      "output": [
        {
          "name": "similarity",
          "data_type": "FLOAT32",
          "shape": [
            1
          ],
          "data": [
            -6.077798
          ],
          "byte_buffer": {
            "array": "Un3CwA==",
            "order": "LITTLE_ENDIAN"
          }
        }
      ]
    },
    {
      "output": [
        {
          "name": "similarity",
          "data_type": "FLOAT32",
          "shape": [
            1
          ],
          "data": [
            10.223609
          ],
          "byte_buffer": {
            "array": "55MjQQ==",
            "order": "LITTLE_ENDIAN"
          }
        }
      ]
    },
    {
      "output": [
        {
          "name": "similarity",
          "data_type": "FLOAT32",
          "shape": [
            1
          ],
          "data": [
            -1.3987057
          ],
          "byte_buffer": {
            "array": "ygizvw==",
            "order": "LITTLE_ENDIAN"
          }
        }
      ]
    },
    {
      "output": [
        {
          "name": "similarity",
          "data_type": "FLOAT32",
          "shape": [
            1
          ],
          "data": [
            -4.5923924
          ],
          "byte_buffer": {
            "array": "4fSSwA==",
            "order": "LITTLE_ENDIAN"
          }
        }
      ]
    }
  ]
}
```

A higher document score means higher similarity. In the preceding response, documents are scored as follows against the query text today is sunny:

| Document text | Score |
| :--- | :--- |
| how are you | -6.077798 |
| today is sunny | 10.223609 |
| today is july fifth | -1.3987057 |
| it is winter | -4.5923924 |

The document that contains the same text as the query is scored the highest, and the remaining documents are scored based on their text similarity to the query.

To learn how to set up a vector index and use text embedding models for search, see Semantic search.

To learn how to set up a vector index and use sparse encoding models for search, see Neural sparse search.

To learn how to use cross-encoder models for reranking, see Reranking search results.