Connector blueprints

Introduced 2.9

Every connector is specified by a connector blueprint. The blueprint defines all the parameters you need to provide when creating a connector.

For example, the following blueprint is a specification for an Amazon SageMaker connector:

```json
{
  "name": "<YOUR CONNECTOR NAME>",
  "description": "<YOUR CONNECTOR DESCRIPTION>",
  "version": "<YOUR CONNECTOR VERSION>",
  "protocol": "aws_sigv4",
  "credential": {
    "access_key": "<YOUR AWS ACCESS KEY>",
    "secret_key": "<YOUR AWS SECRET KEY>",
    "session_token": "<YOUR AWS SECURITY TOKEN>"
  },
  "parameters": {
    "region": "<YOUR AWS REGION>",
    "service_name": "sagemaker"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "headers": {
        "content-type": "application/json"
      },
      "url": "<YOUR SAGEMAKER MODEL ENDPOINT URL>",
      "request_body": "<YOUR REQUEST BODY. Example: ${parameters.inputs}>"
    }
  ]
}
```


OpenSearch-provided connector blueprints

OpenSearch provides connector blueprints for several machine learning (ML) platforms and models. For a list of all connector blueprints provided by OpenSearch, see Supported connectors.

As an ML developer, you can build connector blueprints for other platforms. Using those blueprints, administrators and data scientists can create connectors for models hosted on those platforms.

Configuration parameters

| Field | Data type | Required | Description |
| :--- | :--- | :--- | :--- |
| name | String | Yes | The name of the connector. |
| description | String | Yes | A description of the connector. |
| version | Integer | Yes | The version of the connector. |
| protocol | String | Yes | The protocol for the connection. For AWS services such as Amazon SageMaker and Amazon Bedrock, use aws_sigv4. For all other services, use http. |
| parameters | JSON object | Yes | The default connector parameters, including endpoint and model. Any parameter in this field can be overridden by a parameter specified in a predict request (see the example following this table). |
| credential | JSON object | Yes | Defines any credential variables required to connect to your chosen endpoint. ML Commons uses AES/GCM/NoPadding symmetric encryption to encrypt your credentials. When the connection to the cluster first starts, OpenSearch creates a random 32-byte encryption key that persists in OpenSearch's system index, so you do not need to set the encryption key manually. |
| actions | JSON array | Yes | Defines the actions that can run within the connector. If you're an administrator creating a connection, add the blueprint for your desired connection. |
| backend_roles | JSON array | Yes | A list of OpenSearch backend roles. For more information about setting up backend roles, see Assigning backend roles to users. |
| access_mode | String | Yes | Sets the access mode for the model: public, restricted, or private. Default is private. For more information about access_mode, see Model groups. |
| add_all_backend_roles | Boolean | Yes | When set to true, adds all backend_roles to the access list, which only a user with admin permissions can adjust. When set to false, non-admins can add backend_roles. |
| client_config | JSON object | No | The client configuration object, which provides settings that control the behavior of the client connections used by the connector. These settings allow you to manage connection limits and timeouts, ensuring efficient and reliable communication. |
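
As noted in the parameters row above, a default set in the blueprint can be overridden for a single call through the Predict API. The following is a minimal sketch; the model ID placeholder and the temperature parameter are illustrative and assume a blueprint whose parameters object defines a default temperature:

```json
POST /_plugins/_ml/models/<YOUR MODEL ID>/_predict
{
  "parameters": {
    "temperature": 0
  }
}
```

Any parameter omitted from the request keeps the default value defined in the blueprint.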

The actions parameter supports the following options.

| Field | Data type | Description |
| :--- | :--- | :--- |
| action_type | String | Required. Sets the ML Commons API operation to use upon connection. As of OpenSearch 2.9, only predict is supported. |
| method | String | Required. Defines the HTTP method for the API call. Supports POST and GET. |
| url | String | Required. Sets the connection endpoint at which the action occurs. This must match the regex expression for the connection used when adding trusted endpoints. |
| headers | JSON object | Sets the headers used inside the request or response body. Default is ContentType: application/json. If your third-party ML tool requires access control, define the required credential parameters in the headers parameter (see the sketch following this table). |
| request_body | String | Required. Sets the parameters contained in the request body of the action. The parameters must include \"inputText\", which specifies how users of the connector should construct the request payload for the action_type. |
| pre_process_function | String | Optional. A built-in or custom Painless script used to preprocess the input data. OpenSearch provides the following built-in preprocessing functions that you can call directly: connector.pre_process.cohere.embedding for Cohere embedding models, connector.pre_process.openai.embedding for OpenAI embedding models, and connector.pre_process.default.embedding, which you can use to preprocess documents in neural search requests so that they are in the format that ML Commons can process with the default preprocessor (OpenSearch 2.11 or later). For more information, see Built-in functions. |
| post_process_function | String | Optional. A built-in or custom Painless script used to post-process the model output data. OpenSearch provides the following built-in post-processing functions that you can call directly: connector.post_process.cohere.embedding for Cohere text embedding models, connector.post_process.openai.embedding for OpenAI text embedding models, and connector.post_process.default.embedding, which you can use to post-process documents in the model response so that they are in the format that neural search expects (OpenSearch 2.11 or later). For more information, see Built-in functions. |
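
For example, the following sketch of an actions entry combines header-based access control with the built-in OpenAI pre- and post-processing functions described above. The endpoint, the request body, and the openAI_key credential variable are illustrative; the credential variable must match a key that you define in the connector's credential object:

```json
{
  "action_type": "predict",
  "method": "POST",
  "url": "https://api.openai.com/v1/embeddings",
  "headers": {
    "Authorization": "Bearer ${credential.openAI_key}"
  },
  "request_body": "{ \"input\": ${parameters.input}, \"model\": \"${parameters.model}\" }",
  "pre_process_function": "connector.pre_process.openai.embedding",
  "post_process_function": "connector.post_process.openai.embedding"
}
```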

The client_config parameter supports the following options.

| Field | Data type | Description |
| :--- | :--- | :--- |
| max_connection | Integer | The maximum number of concurrent connections that the client can establish to the server. Some remote services, such as SageMaker, constrain the maximum number of concurrent connections and throw a throttling exception if that threshold is exceeded. The maximum number of concurrent OpenSearch connections is max_connection * node_number_for_connector. To mitigate throttling, decrease the value of this parameter and adjust the retry settings in client_config. Default is 30. |
| connection_timeout | Integer | The maximum amount of time (in seconds) that the client will wait while trying to establish a connection to the server. A timeout prevents the client from waiting indefinitely and allows it to recover when it encounters unreachable network endpoints. |
| read_timeout | Integer | The maximum amount of time (in seconds) that the client will wait for a response from the server after sending a request. This is useful when the server is slow to respond or encounters an issue while processing a request. |
| retry_backoff_policy | String | The backoff policy for retries to the remote connector. This is useful when a spike in traffic causes throttling exceptions. Supported policies are constant, exponential_equal_jitter, and exponential_full_jitter. Default is constant. |
| max_retry_times | Integer | The maximum number of times that a single remote inference request will be retried. This is useful when a spike in traffic causes throttling exceptions. When set to 0, retrying is disabled. When set to -1, OpenSearch does not limit the number of retries. Setting this to a positive integer specifies the maximum number of retry attempts. Default is 0. |
| retry_backoff_millis | Integer | The base backoff time, in milliseconds, for the retry policy. The suspend time between two retries is determined by this parameter together with retry_backoff_policy. Default is 200. |
| retry_timeout_seconds | Integer | The timeout value, in seconds, for the retry. If the retry cannot succeed within the specified amount of time, the connector stops retrying and throws an exception. Default is 30. |
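
Putting these options together, a client_config object in a connector blueprint might look like the following sketch. All values are illustrative; tune them to the connection and throttling limits of your remote service:

```json
"client_config": {
  "max_connection": 20,
  "connection_timeout": 5,
  "read_timeout": 30,
  "retry_backoff_policy": "exponential_equal_jitter",
  "max_retry_times": 3,
  "retry_backoff_millis": 200,
  "retry_timeout_seconds": 30
}
```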

Built-in pre- and post-processing functions

Instead of writing a custom Painless script, you can call the built-in pre- and post-processing functions when connecting to common text embedding models or to your own text embedding models deployed on a remote server (for example, Amazon SageMaker).

OpenSearch provides the following pre- and post-processing functions:

  • OpenAI: connector.pre_process.openai.embedding and connector.post_process.openai.embedding
  • Cohere: connector.pre_process.cohere.embedding and connector.post_process.cohere.embedding
  • Amazon SageMaker default functions for neural search: connector.pre_process.default.embedding and connector.post_process.default.embedding

Amazon SageMaker default pre- and post-processing functions for neural search

When you perform vector search using neural search, the neural search request is routed first to ML Commons and then to the model. If the model is one of the pretrained models provided by OpenSearch, it can parse the ML Commons request and return the response in the format that ML Commons expects. However, for a model hosted on an external platform, the expected format may be different from the ML Commons format. The default pre- and post-processing functions translate between the format that the model expects and the format that neural search expects.

For the default functions to be applied, the model input and output must be in the format described in the following sections.

Example request

The following example request creates a SageMaker text embedding connector and calls the default post-processing function:

```json
POST /_plugins/_ml/connectors/_create
{
  "name": "Sagemaker text embedding connector",
  "description": "The connector to Sagemaker",
  "version": 1,
  "protocol": "aws_sigv4",
  "credential": {
    "access_key": "<YOUR SAGEMAKER ACCESS KEY>",
    "secret_key": "<YOUR SAGEMAKER SECRET KEY>",
    "session_token": "<YOUR AWS SECURITY TOKEN>"
  },
  "parameters": {
    "region": "ap-northeast-1",
    "service_name": "sagemaker"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "sagemaker.ap-northeast-1.amazonaws.com/endpoints/",
      "headers": {
        "content-type": "application/json"
      },
      "post_process_function": "connector.post_process.default.embedding",
      "request_body": "${parameters.input}"
    }
  ]
}
```


The request_body template must be ${parameters.input}.

Preprocessing function

The connector.pre_process.default.embedding default preprocessing function parses the neural search request and transforms it into the format that the model expects as input.

The ML Commons Predict API provides parameters in the following format:

```json
{
  "parameters": {
    "input": ["hello", "world"]
  }
}
```

The default preprocessing function sends the input field contents to the model. Thus, the model input format must be a list of strings, for example:

  1. ["hello", "world"]

Post-processing function

The connector.post_process.default.embedding default post-processing function parses the model response and transforms it into the format that neural search expects as input.

The remote text embedding model output must be a two-dimensional float array, each element of which represents an embedding of a string from the input list. For example, the following two-dimensional array corresponds to the embedding of the list ["hello", "world"]:

```json
[
  [
    -0.048237994,
    -0.07612697,
    ...
  ],
  [
    0.32621247,
    0.02328475,
    ...
  ]
]
```
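
After post-processing, neural search receives each embedding in the ML Commons tensor format. As a rough sketch, the transformed output for the first embedding above looks like the following; the shape value is illustrative, and the field layout matches the format constructed by the custom post-processing function shown later on this page:

```json
{
  "name": "sentence_embedding",
  "data_type": "FLOAT32",
  "shape": [384],
  "data": [-0.048237994, -0.07612697, ...]
}
```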

Custom pre- and post-processing functions

You can write your own pre- and post-processing functions specifically for your model format. For example, the following Amazon Bedrock connector definition contains custom pre- and post-processing functions for the Amazon Bedrock Titan embedding model:

```json
POST /_plugins/_ml/connectors/_create
{
  "name": "Amazon Bedrock Connector: embedding",
  "description": "The connector to the Bedrock Titan embedding model",
  "version": 1,
  "protocol": "aws_sigv4",
  "parameters": {
    "region": "<YOUR AWS REGION>",
    "service_name": "bedrock"
  },
  "credential": {
    "access_key": "<YOUR AWS ACCESS KEY>",
    "secret_key": "<YOUR AWS SECRET KEY>",
    "session_token": "<YOUR AWS SECURITY TOKEN>"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://bedrock-runtime.us-east-1.amazonaws.com/model/amazon.titan-embed-text-v1/invoke",
      "headers": {
        "content-type": "application/json",
        "x-amz-content-sha256": "required"
      },
      "request_body": "{ \"inputText\": \"${parameters.inputText}\" }",
      "pre_process_function": "\n StringBuilder builder = new StringBuilder();\n builder.append(\"\\\"\");\n String first = params.text_docs[0];\n builder.append(first);\n builder.append(\"\\\"\");\n def parameters = \"{\" +\"\\\"inputText\\\":\" + builder + \"}\";\n return \"{\" +\"\\\"parameters\\\":\" + parameters + \"}\";",
      "post_process_function": "\n def name = \"sentence_embedding\";\n def dataType = \"FLOAT32\";\n if (params.embedding == null || params.embedding.length == 0) {\n return params.message;\n }\n def shape = [params.embedding.length];\n def json = \"{\" +\n \"\\\"name\\\":\\\"\" + name + \"\\\",\" +\n \"\\\"data_type\\\":\\\"\" + dataType + \"\\\",\" +\n \"\\\"shape\\\":\" + shape + \",\" +\n \"\\\"data\\\":\" + params.embedding +\n \"}\";\n return json;\n "
    }
  ]
}
```

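Because the pre- and post-processing functions are embedded as escaped JSON strings, they can be difficult to read. With the escaping removed, the same Painless scripts read as follows. The preprocessing function wraps the first input document in the inputText field that the Titan model expects:

```painless
// Quote the first input document and wrap it in {"parameters": {"inputText": "..."}}.
StringBuilder builder = new StringBuilder();
builder.append("\"");
String first = params.text_docs[0];
builder.append(first);
builder.append("\"");
def parameters = "{" + "\"inputText\":" + builder + "}";
return "{" + "\"parameters\":" + parameters + "}";
```

The post-processing function repackages the model's embedding array into the ML Commons tensor format (name, data_type, shape, and data):

```painless
def name = "sentence_embedding";
def dataType = "FLOAT32";
// Return the raw message if the model produced no embedding.
if (params.embedding == null || params.embedding.length == 0) {
  return params.message;
}
// Build the ML Commons tensor JSON by hand.
def shape = [params.embedding.length];
def json = "{" +
  "\"name\":\"" + name + "\"," +
  "\"data_type\":\"" + dataType + "\"," +
  "\"shape\":" + shape + "," +
  "\"data\":" + params.embedding +
  "}";
return json;
```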

Next steps