# Text/image embedding processor

The `text_image_embedding` processor is used to generate combined vector embeddings from text and image fields for multimodal neural search.
**PREREQUISITE**
Before using the `text_image_embedding` processor, you must set up a machine learning (ML) model. For more information, see Choosing a model.
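If your model is already registered with ML Commons, deployment is a single call. The following is a minimal sketch; `<model_id>` is a placeholder for the ID returned during registration:

```json
POST /_plugins/_ml/models/<model_id>/_deploy
```

The call returns a `task_id`; you can poll `GET /_plugins/_ml/tasks/<task_id>` until the task state is `COMPLETED`, at which point the model is ready for use in the processor.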
The following is the syntax for the `text_image_embedding` processor:
```json
{
  "text_image_embedding": {
    "model_id": "<model_id>",
    "embedding": "<vector_field>",
    "field_map": {
      "text": "<input_text_field>",
      "image": "<input_image_field>"
    }
  }
}
```
## Parameters

The following table lists the required and optional parameters for the `text_image_embedding` processor.
Parameter | Data type | Required/Optional | Description
:--- | :--- | :--- | :---
`model_id` | String | Required | The ID of the model that will be used to generate the embeddings. The model must be deployed in OpenSearch before it can be used in neural search. For more information, see Using custom models within OpenSearch and Multimodal search.
`embedding` | String | Required | The name of the vector field in which to store the generated embeddings. A single embedding is generated for both text and image fields.
`field_map` | Object | Required | Contains key-value pairs that specify the fields from which to generate embeddings.
`field_map.text` | String | Optional | The name of the field from which to obtain text for generating vector embeddings. You must specify either `text`, `image`, or both.
`field_map.image` | String | Optional | The name of the field from which to obtain the image for generating vector embeddings. You must specify either `text`, `image`, or both.
`description` | String | Optional | A brief description of the processor.
`tag` | String | Optional | An identifier tag for the processor. Useful for debugging in order to distinguish between processors of the same type.
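Because `field_map.text` and `field_map.image` are individually optional, a processor can map just one of them. For example, the following configuration (a minimal sketch; the model ID is a placeholder) generates embeddings from text alone:

```json
{
  "text_image_embedding": {
    "model_id": "<model_id>",
    "embedding": "vector_embedding",
    "field_map": {
      "text": "image_description"
    }
  }
}
```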
## Using the processor

Follow these steps to use the processor in a pipeline. You must provide a model ID when creating the processor. For more information, see Using custom models within OpenSearch.

### Step 1: Create a pipeline
The following example request creates an ingest pipeline in which the text from the `image_description` field and the image from the `image_binary` field are converted into vector embeddings, and the resulting embeddings are stored in the `vector_embedding` field:
```json
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "A text/image embedding pipeline",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "bQ1J8ooBpBj3wT4HVUsb",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    }
  ]
}
```
You can set up multiple processors in one pipeline to generate embeddings for multiple fields.
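For example, the following sketch defines two `text_image_embedding` processors in one pipeline, each writing to its own vector field. The `thumbnail_*` field names and the second embedding field are hypothetical:

```json
PUT /_ingest/pipeline/nlp-ingest-pipeline-multi
{
  "description": "A pipeline with two text/image embedding processors",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "bQ1J8ooBpBj3wT4HVUsb",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    },
    {
      "text_image_embedding": {
        "model_id": "bQ1J8ooBpBj3wT4HVUsb",
        "embedding": "thumbnail_embedding",
        "field_map": {
          "text": "thumbnail_caption",
          "image": "thumbnail_binary"
        }
      }
    }
  ]
}
```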
### Step 2 (Optional): Test the pipeline

It is recommended that you test your pipeline before you ingest documents.

To test the pipeline, run the following query:
```json
POST _ingest/pipeline/nlp-ingest-pipeline/_simulate
{
  "docs": [
    {
      "_index": "testindex1",
      "_id": "1",
      "_source": {
        "image_description": "Orange table",
        "image_binary": "bGlkaHQtd29rfx43..."
      }
    }
  ]
}
```
#### Response

The response confirms that, in addition to the `image_description` and `image_binary` fields, the processor has generated vector embeddings in the `vector_embedding` field:
```json
{
  "docs": [
    {
      "doc": {
        "_index": "testindex1",
        "_id": "1",
        "_source": {
          "vector_embedding": [
            -0.048237972,
            -0.07612712,
            0.3262124,
            ...
            -0.16352308
          ],
          "image_description": "Orange table",
          "image_binary": "bGlkaHQtd29rfx43..."
        },
        "_ingest": {
          "timestamp": "2023-10-05T15:15:19.691345393Z"
        }
      }
    }
  ]
}
```
Once you have created an ingest pipeline, you need to create an index for ingestion and ingest documents into the index. To learn more, see Step 2: Create an index for ingestion and Step 3: Ingest documents into the index of Multimodal search.
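As a minimal sketch of those two steps, assuming a hypothetical index named `my-nlp-index` and an embedding dimension of 1024 (this must match your model's output dimension), you would map `vector_embedding` as a `knn_vector` field, attach the pipeline as the index default, and then ingest a document:

```json
PUT /my-nlp-index
{
  "settings": {
    "index.knn": true,
    "default_pipeline": "nlp-ingest-pipeline"
  },
  "mappings": {
    "properties": {
      "vector_embedding": {
        "type": "knn_vector",
        "dimension": 1024,
        "method": {
          "name": "hnsw",
          "engine": "lucene",
          "space_type": "l2"
        }
      },
      "image_description": {
        "type": "text"
      },
      "image_binary": {
        "type": "binary"
      }
    }
  }
}
```

```json
PUT /my-nlp-index/_doc/1
{
  "image_description": "Orange table",
  "image_binary": "bGlkaHQtd29rfx43..."
}
```

Because the pipeline is set as the `default_pipeline`, the combined embedding is generated automatically at ingestion time.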
## Next steps

- To learn how to use the `neural` query for multimodal search, see Neural query. (A brief query sketch follows this list.)
- To learn more about multimodal search, see Multimodal search.
- To learn more about using models in OpenSearch, see Choosing a model.
- For a comprehensive example, see Neural search tutorial.
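For reference, a multimodal `neural` query against the index sketched above might look like the following; the index name and model ID are carried over from the earlier examples:

```json
GET /my-nlp-index/_search
{
  "_source": {
    "excludes": ["vector_embedding"]
  },
  "query": {
    "neural": {
      "vector_embedding": {
        "query_text": "Orange table",
        "query_image": "bGlkaHQtd29rfx43...",
        "model_id": "bQ1J8ooBpBj3wT4HVUsb",
        "k": 5
      }
    }
  }
}
```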