Rerank processor

Introduced 2.12

The rerank search request processor intercepts search results and passes them to a cross-encoder model to be reranked. The model reranks the results, taking into account the scoring context. Then the processor orders documents in the search results based on their new scores.

Request fields

The following table lists all available request fields.

| Field | Data type | Description |
| :--- | :--- | :--- |
| `<reranker_type>` | Object | The reranker type provides the rerank processor with static information needed across all reranking calls. Required. |
| `context` | Object | Provides the rerank processor with information necessary for generating reranking context at query time. |
| `tag` | String | The processor's identifier. Optional. |
| `description` | String | A description of the processor. Optional. |
| `ignore_failure` | Boolean | If `true`, OpenSearch ignores any failure of this processor and continues to run the remaining processors in the search pipeline. Optional. Default is `false`. |
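
As a sketch, a processor definition that combines the optional fields with the `ml_opensearch` reranker type might look as follows (the pipeline name, tag, description, and model ID are placeholders, not values from this documentation):

```json
PUT /_search/pipeline/my_rerank_pipeline
{
  "response_processors": [
    {
      "rerank": {
        "tag": "demo_rerank",
        "description": "Rerank results using a cross-encoder model",
        "ignore_failure": false,
        "ml_opensearch": {
          "model_id": "<your model ID>"
        },
        "context": {
          "document_fields": [ "text_representation" ]
        }
      }
    }
  ]
}
```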

The ml_opensearch reranker type

The ml_opensearch reranker type is designed to work with the cross-encoder model provided by OpenSearch. For this reranker type, specify the following fields.

| Field | Data type | Description |
| :--- | :--- | :--- |
| `ml_opensearch` | Object | Provides the rerank processor with model information. Required. |
| `ml_opensearch.model_id` | String | The model ID for the cross-encoder model. Required. For more information, see Using ML models. |
| `context.document_fields` | Array | An array of document fields that specifies the fields from which to retrieve context for the cross-encoder model. Required. |

Example

The following example demonstrates using a search pipeline with a rerank processor.

Creating a search pipeline

The following request creates a search pipeline with a rerank response processor:

```json
PUT /_search/pipeline/rerank_pipeline
{
  "response_processors": [
    {
      "rerank": {
        "ml_opensearch": {
          "model_id": "gnDIbI0BfUsSoeNT_jAw"
        },
        "context": {
          "document_fields": [ "title", "text_representation" ]
        }
      }
    }
  ]
}
```

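After creating the pipeline, you can retrieve its definition as a quick sanity check that it was stored:

```json
GET /_search/pipeline/rerank_pipeline
```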

Using a search pipeline

Combine an OpenSearch query with an `ext` object that contains the query context for the cross-encoder model. Provide the `query_text` that will be used to rerank the results:

```json
POST /_search?search_pipeline=rerank_pipeline
{
  "query": {
    "match": {
      "text_representation": "Where is Albuquerque?"
    }
  },
  "ext": {
    "rerank": {
      "query_context": {
        "query_text": "Where is Albuquerque?"
      }
    }
  }
}
```


Instead of specifying `query_text`, you can provide the full path to the field containing the text to use for reranking. For example, if the query text is located in a `query` subfield of the `text_representation` object, specify its path in the `query_text_path` parameter:

```json
POST /_search?search_pipeline=rerank_pipeline
{
  "query": {
    "match": {
      "text_representation": {
        "query": "Where is Albuquerque?"
      }
    }
  },
  "ext": {
    "rerank": {
      "query_context": {
        "query_text_path": "query.match.text_representation.query"
      }
    }
  }
}
```


The query_context object contains the following fields.

| Field name | Description |
| :--- | :--- |
| `query_text` | The natural language text of the question that you want to use to rerank the search results. Either `query_text` or `query_text_path` (not both) is required. |
| `query_text_path` | The full JSON path to the text of the question that you want to use to rerank the search results. Either `query_text` or `query_text_path` (not both) is required. The maximum number of characters in the path is 1000. |
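
If you want reranking to run on every search request without passing the `search_pipeline` query parameter, you can set the pipeline as the default search pipeline for an index using the `index.search.default_pipeline` setting (the index name below is a placeholder):

```json
PUT /my-index/_settings
{
  "index.search.default_pipeline": "rerank_pipeline"
}
```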

For more information about setting up reranking, see Reranking search results.