k-NN index

The k-NN plugin introduces a custom data type, the knn_vector, that allows users to ingest their k-NN vectors into an OpenSearch index and perform different kinds of k-NN search. The knn_vector field is highly configurable and can serve many different k-NN workloads. For more information, see k-NN vector.

To create a k-NN index, set the settings.index.knn parameter to true:

```json
PUT /test-index
{
  "settings": {
    "index": {
      "knn": true
    }
  },
  "mappings": {
    "properties": {
      "my_vector1": {
        "type": "knn_vector",
        "dimension": 3,
        "space_type": "l2",
        "method": {
          "name": "hnsw",
          "engine": "lucene",
          "parameters": {
            "ef_construction": 128,
            "m": 24
          }
        }
      }
    }
  }
}
```


Byte vectors

Starting with k-NN plugin version 2.17, you can use byte vectors with the faiss and lucene engines to reduce the amount of required memory and storage space. For more information, see Byte vectors.
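For illustration, a byte vector field might be mapped as follows (a sketch assuming the data_type mapping parameter covered in the byte vector documentation; the index and field names are placeholders):

```json
PUT /test-byte-index
{
  "settings": {
    "index": {
      "knn": true
    }
  },
  "mappings": {
    "properties": {
      "my_byte_vector": {
        "type": "knn_vector",
        "dimension": 3,
        "data_type": "byte",
        "space_type": "l2",
        "method": {
          "name": "hnsw",
          "engine": "lucene"
        }
      }
    }
  }
}
```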

Binary vectors

Starting with k-NN plugin version 2.16, you can use binary vectors with the faiss engine to reduce the amount of required storage space. For more information, see Binary vectors.
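As a sketch (assuming the data_type mapping parameter and the hamming space type covered in the binary vector documentation; the index and field names are placeholders, and the dimension is specified in bits), a binary vector field might be mapped as follows:

```json
PUT /test-binary-index
{
  "settings": {
    "index": {
      "knn": true
    }
  },
  "mappings": {
    "properties": {
      "my_binary_vector": {
        "type": "knn_vector",
        "dimension": 8,
        "data_type": "binary",
        "space_type": "hamming",
        "method": {
          "name": "hnsw",
          "engine": "faiss"
        }
      }
    }
  }
}
```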

SIMD optimization for the Faiss engine

Starting with version 2.13, the k-NN plugin supports Single Instruction Multiple Data (SIMD) processing if the underlying hardware supports SIMD instructions (AVX2 on x64 architecture and Neon on ARM64 architecture). By default, SIMD is supported only on Linux machines and only for the Faiss engine. SIMD helps boost overall performance by improving indexing throughput and reducing search latency. Starting with version 2.18, the k-NN plugin supports AVX512 SIMD instructions on x64 architecture.

SIMD optimization is applicable only if the vector dimension is a multiple of 8.

x64 architecture

For x64 architecture, the following versions of the Faiss library are built and shipped with the artifact:

  • libopensearchknn_faiss.so: The non-optimized Faiss library without SIMD instructions.
  • libopensearchknn_faiss_avx512.so: The Faiss library containing AVX512 SIMD instructions.
  • libopensearchknn_faiss_avx2.so: The Faiss library containing AVX2 SIMD instructions.

When using the Faiss library, the performance ranking is as follows: AVX512 > AVX2 > no optimization.

If your hardware supports AVX512, the k-NN plugin loads the libopensearchknn_faiss_avx512.so library at runtime.

If your hardware supports AVX2 but doesn’t support AVX512, the k-NN plugin loads the libopensearchknn_faiss_avx2.so library at runtime.

To disable the AVX512 and AVX2 SIMD instructions and load the non-optimized Faiss library (libopensearchknn_faiss.so), specify the knn.faiss.avx512.disabled and knn.faiss.avx2.disabled static settings as true in opensearch.yml (by default, both of these are false).
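For example, the following opensearch.yml snippet disables both optimized libraries so that libopensearchknn_faiss.so is loaded instead:

```yaml
# Disable the optimized Faiss libraries; the non-optimized
# libopensearchknn_faiss.so library is loaded at runtime.
knn.faiss.avx512.disabled: true
knn.faiss.avx2.disabled: true
```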

Note that to update a static setting, you must stop the cluster, change the setting, and restart the cluster. For more information, see Static settings.

ARM64 architecture

For the ARM64 architecture, only one performance-boosting Faiss library (libopensearchknn_faiss.so) is built and shipped. The library contains Neon SIMD instructions and cannot be disabled.

Method definitions

A method definition refers to the underlying configuration of the approximate k-NN algorithm you want to use. Method definitions are used to either create a knn_vector field (when the method does not require training) or create a model during training that can then be used to create a knn_vector field.

A method definition always contains the name of the method, the space_type the method is built for, the engine (the library) to use, and a map of parameters.

| Mapping parameter | Required | Default | Updatable | Description |
| :--- | :--- | :--- | :--- | :--- |
| name | true | n/a | false | The identifier for the nearest neighbor method. |
| space_type | false | l2 | false | The vector space used to calculate the distance between vectors. Note: This value can also be specified at the top level of the mapping. |
| engine | false | faiss | false | The approximate k-NN library to use for indexing and search. The available libraries are faiss, nmslib, and lucene. |
| parameters | false | null | false | The parameters used for the nearest neighbor method. |

Supported nmslib methods

| Method name | Requires training | Supported spaces | Description |
| :--- | :--- | :--- | :--- |
| hnsw | false | l2, innerproduct, cosinesimil, l1, linf | Hierarchical proximity graph approach to approximate k-NN search. For more details on the algorithm, see this abstract. |

HNSW parameters

| Parameter name | Required | Default | Updatable | Description |
| :--- | :--- | :--- | :--- | :--- |
| ef_construction | false | 100 | false | The size of the dynamic list used during k-NN graph creation. Higher values result in a more accurate graph but slower indexing speed. |
| m | false | 16 | false | The number of bidirectional links that the plugin creates for each new element. Increasing and decreasing this value can have a large impact on memory consumption. Keep this value between 2 and 100. |

For nmslib, ef_search is set in the index settings.
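For example, ef_search for nmslib can be updated through the index settings API (a sketch; test-index is a placeholder, and the setting name follows the index.knn.algo_param.ef_search setting listed in the index settings section of this page):

```json
PUT /test-index/_settings
{
  "index": {
    "knn.algo_param.ef_search": 100
  }
}
```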

An index created in OpenSearch version 2.11 or earlier will still use the old ef_construction value (512).

Supported Faiss methods

| Method name | Requires training | Supported spaces | Description |
| :--- | :--- | :--- | :--- |
| hnsw | false | l2, innerproduct, hamming | Hierarchical proximity graph approach to approximate k-NN search. |
| ivf | true | l2, innerproduct, hamming | Stands for inverted file index. Bucketing approach in which vectors are assigned to different buckets based on clustering and, during search, only a subset of the buckets is searched. |

For hnsw, “innerproduct” is not available when PQ is used.

The hamming space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see Binary k-NN vectors.

HNSW parameters

| Parameter name | Required | Default | Updatable | Description |
| :--- | :--- | :--- | :--- | :--- |
| ef_search | false | 100 | false | The size of the dynamic list used during k-NN searches. Higher values result in more accurate but slower searches. |
| ef_construction | false | 100 | false | The size of the dynamic list used during k-NN graph creation. Higher values result in a more accurate graph but slower indexing speed. |
| m | false | 16 | false | The number of bidirectional links that the plugin creates for each new element. Increasing and decreasing this value can have a large impact on memory consumption. Keep this value between 2 and 100. |
| encoder | false | flat | false | Encoder definition for encoding vectors. Encoders can reduce the memory footprint of your index at the expense of search accuracy. |

An index created in OpenSearch version 2.11 or earlier will still use the old ef_construction and ef_search values (512).

IVF parameters

| Parameter name | Required | Default | Updatable | Description |
| :--- | :--- | :--- | :--- | :--- |
| nlist | false | 4 | false | Number of buckets to partition vectors into. Higher values may lead to more accurate searches at the expense of memory and training latency. For more information about choosing the right value, refer to Guidelines to choose an index. |
| nprobes | false | 1 | false | Number of buckets to search during a query. Higher values lead to more accurate but slower searches. |
| encoder | false | flat | false | Encoder definition for encoding vectors. Encoders can reduce the memory footprint of your index at the expense of search accuracy. |

For more information about setting these parameters, refer to the Faiss documentation.

IVF training requirements

The IVF algorithm requires a training step. To create an index that uses IVF, you need to train a model with the Train API, passing the IVF method definition. IVF requires that, at a minimum, there are nlist training data points, but it is recommended that you use more than this. Training data can be composed of either the same data that is going to be ingested or a separate dataset.
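As a sketch of that workflow (the model ID, index, and field names are placeholders; consult the Train API documentation for the exact request format), a training request might look like the following:

```json
POST /_plugins/_knn/models/my-ivf-model/_train
{
  "training_index": "train-index",
  "training_field": "train-field",
  "dimension": 4,
  "method": {
    "name": "ivf",
    "engine": "faiss",
    "space_type": "l2",
    "parameters": {
      "nlist": 4,
      "nprobes": 2
    }
  }
}
```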

Supported Lucene methods

| Method name | Requires training | Supported spaces | Description |
| :--- | :--- | :--- | :--- |
| hnsw | false | l2, cosinesimil, innerproduct (supported in OpenSearch 2.13 and later) | Hierarchical proximity graph approach to approximate k-NN search. |

HNSW parameters

| Parameter name | Required | Default | Updatable | Description |
| :--- | :--- | :--- | :--- | :--- |
| ef_construction | false | 100 | false | The size of the dynamic list used during k-NN graph creation. Higher values result in a more accurate graph but slower indexing speed. The Lucene engine uses the proprietary term "beam_width" to describe this function, which corresponds directly to "ef_construction". To be consistent throughout the OpenSearch documentation, we retain the term "ef_construction" for this parameter. |
| m | false | 16 | false | The number of bidirectional links that the plugin creates for each new element. Increasing and decreasing this value can have a large impact on memory consumption. Keep this value between 2 and 100. The Lucene engine uses the proprietary term "max_connections" to describe this function, which corresponds directly to "m". To be consistent throughout the OpenSearch documentation, we retain the term "m" for this parameter. |

The Lucene HNSW implementation ignores ef_search and dynamically sets it to the value of k in the search request. Therefore, you do not need to configure ef_search when using the Lucene engine.

An index created in OpenSearch version 2.11 or earlier will still use the old ef_construction value (512).

The following example method definition specifies m and ef_construction values for the Lucene engine:

```json
"method": {
  "name": "hnsw",
  "engine": "lucene",
  "parameters": {
    "m": 2048,
    "ef_construction": 245
  }
}
```

Supported Faiss encoders

You can use encoders to reduce the memory footprint of a k-NN index at the expense of search accuracy. The k-NN plugin currently supports the flat, pq, and sq encoders in the Faiss library.

The following example method definition specifies the hnsw method and a pq encoder:

```json
"method": {
  "name": "hnsw",
  "engine": "faiss",
  "parameters": {
    "encoder": {
      "name": "pq",
      "parameters": {
        "code_size": 8,
        "m": 8
      }
    }
  }
}
```

The hnsw method supports the pq encoder for OpenSearch versions 2.10 and later. The code_size parameter of a pq encoder with the hnsw method must be 8.

| Encoder name | Requires training | Description |
| :--- | :--- | :--- |
| flat (default) | false | Encode vectors as floating-point arrays. This encoding does not reduce memory footprint. |
| pq | true | An abbreviation for product quantization, a lossy compression technique that uses clustering to encode a vector into a fixed number of bytes, with the goal of minimizing the drop in k-NN search accuracy. At a high level, vectors are broken up into m subvectors, and then each subvector is represented by a code_size code obtained from a code book produced during training. For more information about product quantization, see this blog post. |
| sq | false | An abbreviation for scalar quantization. Starting with k-NN plugin version 2.13, you can use the sq encoder to quantize 32-bit floating-point vectors into 16-bit floats. In version 2.13, the built-in sq encoder is the SQFP16 Faiss encoder. The encoder reduces memory footprint with a minimal loss of precision and improves performance by using SIMD optimization (AVX2 on x86 architecture or Neon on ARM64 architecture). For more information, see Faiss scalar quantization. |

PQ parameters

| Parameter name | Required | Default | Updatable | Description |
| :--- | :--- | :--- | :--- | :--- |
| m | false | 1 | false | Determines the number of subvectors into which to break the vector. Subvectors are encoded independently of each other. The vector dimension must be divisible by m. Maximum value is 1,024. |
| code_size | false | 8 | false | Determines the number of bits into which to encode a subvector. Maximum value is 8. For IVF, this value must be less than or equal to 8. For HNSW, this value can only be 8. |

SQ parameters

| Parameter name | Required | Default | Updatable | Description |
| :--- | :--- | :--- | :--- | :--- |
| type | false | fp16 | false | The type of scalar quantization used to encode 32-bit float vectors into the corresponding type. As of OpenSearch 2.13, only the fp16 encoder type is supported. For the fp16 encoder, vector values must be in the [-65504.0, 65504.0] range. |
| clip | false | false | false | If true, any vector values outside of the supported range for the specified vector type are rounded so that they are in the range. If false, the request is rejected if any vector values are outside of the supported range. Setting clip to true may decrease recall. |

For more information and examples, see Using Faiss scalar quantization.

Examples

The following example uses the ivf method without specifying an encoder (by default, OpenSearch uses the flat encoder):

```json
"method": {
  "name": "ivf",
  "engine": "faiss",
  "parameters": {
    "nlist": 4,
    "nprobes": 2
  }
}
```

The following example uses the ivf method with a pq encoder:

```json
"method": {
  "name": "ivf",
  "engine": "faiss",
  "parameters": {
    "encoder": {
      "name": "pq",
      "parameters": {
        "code_size": 8,
        "m": 8
      }
    }
  }
}
```

The following example uses the hnsw method without specifying an encoder (by default, OpenSearch uses the flat encoder):

```json
"method": {
  "name": "hnsw",
  "engine": "faiss",
  "parameters": {
    "ef_construction": 256,
    "m": 8
  }
}
```

The following example uses the hnsw method with an sq encoder of type fp16 with clip enabled:

```json
"method": {
  "name": "hnsw",
  "engine": "faiss",
  "parameters": {
    "encoder": {
      "name": "sq",
      "parameters": {
        "type": "fp16",
        "clip": true
      }
    },
    "ef_construction": 256,
    "m": 8
  }
}
```

The following example uses the ivf method with an sq encoder of type fp16:

```json
"method": {
  "name": "ivf",
  "engine": "faiss",
  "parameters": {
    "encoder": {
      "name": "sq",
      "parameters": {
        "type": "fp16",
        "clip": false
      }
    },
    "nprobes": 2
  }
}
```

Choosing the right method

There are several options to choose from when building your knn_vector field. To determine the correct methods and parameters, you should first understand the requirements of your workload and what trade-offs you are willing to make. Factors to consider are (1) query latency, (2) query quality, (3) memory limits, and (4) indexing latency.

If memory is not a concern, HNSW offers a strong query latency/query quality trade-off.

If you want to use less memory and increase indexing speed as compared to HNSW while maintaining similar query quality, you should evaluate IVF.

If memory is a concern, consider adding a PQ encoder to your HNSW or IVF index. Because PQ is a lossy encoding, query quality will drop.

You can reduce the memory footprint by a factor of 2, with a minimal loss in search quality, by using the fp16 encoder. If your vector values are within the [-128, 127] byte range, we recommend using the byte quantizer to reduce the memory footprint by a factor of 4. To learn more about vector quantization options, see k-NN vector quantization.

Memory estimation

In a typical OpenSearch cluster, a certain portion of RAM is reserved for the JVM heap. The k-NN plugin allocates native library indexes to a portion of the remaining RAM. This portion’s size is determined by the circuit_breaker_limit cluster setting. By default, the limit is set to 50%.
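For example, assuming the full setting name is knn.memory.circuit_breaker.limit (an assumption; check the k-NN settings documentation for the exact name), the limit could be updated dynamically as follows:

```json
PUT /_cluster/settings
{
  "persistent": {
    "knn.memory.circuit_breaker.limit": "60%"
  }
}
```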

Having a replica doubles the total number of vectors.

For information about using memory estimation with vector quantization, see the vector quantization documentation.

HNSW memory estimation

The memory required for HNSW is estimated to be 1.1 * (4 * dimension + 8 * M) bytes/vector.

As an example, assume you have a million vectors with a dimension of 256 and M of 16. The memory requirement can be estimated as follows:

```
1.1 * (4 * 256 + 8 * 16) * 1,000,000 ~= 1.267 GB
```

IVF memory estimation

The memory required for IVF is estimated to be 1.1 * ((4 * dimension * num_vectors) + (4 * nlist * dimension)) bytes.

As an example, assume you have a million vectors with a dimension of 256 and nlist of 128. The memory requirement can be estimated as follows:

```
1.1 * (((4 * 256) * 1,000,000) + (4 * 128 * 256)) ~= 1.126 GB
```
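The two estimates above can be reproduced with a short script (a convenience sketch; hnsw_memory_bytes and ivf_memory_bytes are illustrative helper names, not part of any k-NN API):

```python
def hnsw_memory_bytes(dimension: int, m: int, num_vectors: int) -> float:
    # HNSW estimate: 1.1 * (4 * dimension + 8 * M) bytes per vector
    return 1.1 * (4 * dimension + 8 * m) * num_vectors


def ivf_memory_bytes(dimension: int, nlist: int, num_vectors: int) -> float:
    # IVF estimate: 1.1 * ((4 * dimension * num_vectors) + (4 * nlist * dimension)) bytes
    return 1.1 * ((4 * dimension * num_vectors) + (4 * nlist * dimension))


# One million 256-dimensional vectors with M = 16 (HNSW) or nlist = 128 (IVF)
print(f"HNSW: {hnsw_memory_bytes(256, 16, 1_000_000) / 1e9:.3f} GB")
print(f"IVF:  {ivf_memory_bytes(256, 128, 1_000_000) / 1e9:.3f} GB")
```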

Index settings

The k-NN plugin also introduces several index settings that you can use to configure the k-NN structure.

Several parameters defined in the index settings are currently in the process of being deprecated; set these parameters in the mapping instead. Parameters set in the mapping override parameters set in the index settings. Setting parameters in the mapping allows an index to contain multiple knn_vector fields with different parameters.

| Setting | Default | Updatable | Description |
| :--- | :--- | :--- | :--- |
| index.knn | false | false | Whether the index should build native library indexes for the knn_vector fields. If set to false, the knn_vector fields will be stored in doc values, but approximate k-NN search functionality will be disabled. |
| index.knn.algo_param.ef_search | 100 | true | The size of the dynamic list used during k-NN searches. Higher values result in more accurate but slower searches. Only available for nmslib. |
| index.knn.advanced.approximate_threshold | 15,000 | true | The number of vectors a segment must have before creating specialized data structures for approximate search. Set to -1 to disable building vector data structures and to 0 to always build them. |
| index.knn.algo_param.ef_construction | 100 | false | Deprecated in 1.0.0. Use the mapping parameters to set this value instead. |
| index.knn.algo_param.m | 16 | false | Deprecated in 1.0.0. Use the mapping parameters to set this value instead. |
| index.knn.space_type | l2 | false | Deprecated in 1.0.0. Use the mapping parameters to set this value instead. |

An index created in OpenSearch version 2.11 or earlier will still use the old ef_construction and ef_search values (512).