Neural sparse query two-phase processor

Introduced 2.15

The neural_sparse_two_phase_processor search processor speeds up neural sparse search. Instead of scoring every document against every query token in a single pass, it splits scoring into two phases:

  1. High-weight tokens score the documents and filter out the top documents.
  2. Low-weight tokens rescore the top documents.
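The two steps above can be sketched in Python. This is an illustrative sketch only, not the actual OpenSearch implementation: the dot-product scoring function and the token-weight dictionaries are simplifying assumptions, while the prune_ratio and expansion_rate parameters follow the descriptions in this document.

```python
# Illustrative sketch of two-phase neural sparse scoring.
# Not the actual OpenSearch implementation; the dot-product scoring
# model and token-weight dictionaries are simplifying assumptions.

def two_phase_search(query_tokens, docs, size=10, prune_ratio=0.4, expansion_rate=5.0):
    """query_tokens: {token: weight}; docs: {doc_id: {token: weight}}."""
    # Split query tokens: a token is "high weight" if its weight is at
    # least the maximum token weight multiplied by prune_ratio.
    threshold = max(query_tokens.values()) * prune_ratio
    high = {t: w for t, w in query_tokens.items() if w >= threshold}
    low = {t: w for t, w in query_tokens.items() if w < threshold}

    def score(tokens, doc):
        return sum(w * doc.get(t, 0.0) for t, w in tokens.items())

    # Phase 1: score all documents with high-weight tokens only and keep
    # the top (size * expansion_rate) candidates.
    window = int(size * expansion_rate)
    phase1 = sorted(docs, key=lambda d: score(high, docs[d]), reverse=True)[:window]

    # Phase 2: rescore only those candidates with the low-weight tokens
    # added back, then return the final top results.
    phase2 = sorted(phase1, key=lambda d: score(high, docs[d]) + score(low, docs[d]),
                    reverse=True)
    return phase2[:size]
```

Because phase 1 touches far fewer posting lists and phase 2 touches far fewer documents, the total work is much smaller than scoring every document with every token.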

Request body fields

The following table lists all available request fields.

| Field | Data type | Description |
| :--- | :--- | :--- |
| enabled | Boolean | Controls whether the two-phase processor is enabled. Default is true. |
| two_phase_parameter | Object | A map of key-value pairs representing the two-phase parameters and their associated values. You can specify the value of prune_ratio, expansion_rate, max_window_size, or any combination of these three parameters. Optional. |
| two_phase_parameter.prune_ratio | Float | A ratio that specifies how to split tokens into high-weight and low-weight groups. The threshold is the query's maximum token score multiplied by prune_ratio. Valid range is [0, 1]. Default is 0.4. |
| two_phase_parameter.expansion_rate | Float | The rate at which documents are rescored during the second phase. The number of second-phase documents equals the query size (default is 10) multiplied by expansion_rate. Valid values are greater than 1.0. Default is 5.0. |
| two_phase_parameter.max_window_size | Integer | The maximum number of documents that the two-phase processor can process. Valid values are greater than 50. Default is 10000. |
| tag | String | The processor's identifier. Optional. |
| description | String | A description of the processor. Optional. |
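As a quick sanity check on how these parameters interact, the following worked example uses the defaults and a hypothetical query whose highest-weight token scores 2.5 (the token score is an assumption for illustration):

```python
# Worked example of the default two-phase parameters described above.
prune_ratio = 0.4        # default
expansion_rate = 5.0     # default
max_window_size = 10000  # default

# If the highest-weight query token scores 2.5 (hypothetical), tokens
# scoring below 2.5 * 0.4 = 1.0 are deferred to the second phase.
max_token_score = 2.5
threshold = max_token_score * prune_ratio

# With the default query size of 10, the second phase rescores
# 10 * 5.0 = 50 documents, capped at max_window_size.
size = 10
second_phase_docs = min(int(size * expansion_rate), max_window_size)

print(threshold, second_phase_docs)  # prints: 1.0 50
```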

Example

The following example creates a search pipeline with a neural_sparse_two_phase_processor search request processor.

Create search pipeline

The following request creates a search pipeline with a neural_sparse_two_phase_processor search request processor, enabling the processor and setting all three two-phase parameters (replace the custom_* placeholders with your own values):

```json
PUT /_search/pipeline/two_phase_search_pipeline
{
  "request_processors": [
    {
      "neural_sparse_two_phase_processor": {
        "tag": "neural-sparse",
        "description": "Creates a two-phase processor.",
        "enabled": true,
        "two_phase_parameter": {
          "prune_ratio": custom_prune_ratio,
          "expansion_rate": custom_expansion_rate,
          "max_window_size": custom_max_window_size
        }
      }
    }
  ]
}
```

Set search pipeline

After the two-phase pipeline is created, set the index.search.default_pipeline setting to the name of the pipeline for the index on which you want to use the two-phase pipeline:

```json
PUT /index-name/_settings
{
  "index.search.default_pipeline": "two_phase_search_pipeline"
}
```

Limitations

The neural_sparse_two_phase_processor has the following limitations.

Version support

The neural_sparse_two_phase_processor can only be used with OpenSearch 2.15 or later.

Compound query support

As of OpenSearch 2.15, only the Boolean compound query is supported.

Neural sparse queries and Boolean queries with a boost parameter (but not boosting queries) are also supported.

Examples

The following examples show neural sparse queries with the supported query types.

Single neural sparse query

```json
GET /my-nlp-index/_search
{
  "query": {
    "neural_sparse": {
      "passage_embedding": {
        "query_text": "Hi world",
        "model_id": <model-id>
      }
    }
  }
}
```

Neural sparse query nested in a Boolean query

```json
GET /my-nlp-index/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "neural_sparse": {
            "passage_embedding": {
              "query_text": "Hi world",
              "model_id": <model-id>
            },
            "boost": 2.0
          }
        }
      ]
    }
  }
}
```

P99 latency metrics

The following P99 latency metrics were measured on an OpenSearch cluster running on three m5.4xlarge Amazon Elastic Compute Cloud (Amazon EC2) instances, using neural sparse queries on indexes built from more than 10 datasets.

Doc-only mode latency metric

In doc-only mode, the two-phase processor can significantly decrease query latency, as shown by the following latency metrics:

  • Average latency without the two-phase processor: 53.56 ms
  • Average latency with the two-phase processor: 38.61 ms

This results in an overall latency reduction of approximately 27.92%. Most indexes show a significant latency reduction when using the two-phase processor, with reductions ranging from 5.14% to 84.6%. The specific latency optimization values depend on the data distribution within the indexes.

Bi-encoder mode latency metric

In bi-encoder mode, the two-phase processor can significantly decrease query latency, as shown by the following latency metrics:

  • Average latency without the two-phase processor: 300.79 ms
  • Average latency with the two-phase processor: 121.64 ms

This results in an overall latency reduction of approximately 59.56%. Most indexes show a significant latency reduction when using the two-phase processor, with reductions ranging from 1.56% to 82.84%. The specific latency optimization values depend on the data distribution within the indexes.
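The overall reduction figures follow directly from the reported averages; the following quick check reproduces them (to within rounding):

```python
# Verify the overall latency reductions from the average latencies
# reported above (all values in milliseconds).
def reduction(before, after):
    return (before - after) / before * 100

doc_only = reduction(53.56, 38.61)      # doc-only mode
bi_encoder = reduction(300.79, 121.64)  # bi-encoder mode

print(round(doc_only, 1), round(bi_encoder, 2))  # prints: 27.9 59.56
```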