Vector search

OpenSearch is a comprehensive search platform that supports a variety of data types, including vectors. Its vector database functionality is integrated with its general-purpose search and analytics capabilities.

In OpenSearch, you can generate vector embeddings, store those embeddings in an index, and use them for vector search. Choose one of the following options:

  • Generate embeddings using a library of your choice before ingesting them into OpenSearch. Once you ingest vectors into an index, you can perform a vector similarity search on the vector space. For more information, see Working with embeddings generated outside of OpenSearch.
  • Automatically generate embeddings within OpenSearch. To use embeddings for semantic search, the ingested text (the corpus) and the query need to be embedded using the same model. Neural search packages this functionality, eliminating the need to manage the internal details. For more information, see Generating vector embeddings within OpenSearch.

Working with embeddings generated outside of OpenSearch

After you generate vector embeddings, upload them to an OpenSearch index and search the index using vector search. For a complete example, see Example.

k-NN index

To build a vector database and use vector search, you must specify your index as a k-NN index when creating it by setting index.knn to true:

```json
PUT test-index
{
  "settings": {
    "index": {
      "knn": true,
      "knn.algo_param.ef_search": 100
    }
  },
  "mappings": {
    "properties": {
      "my_vector1": {
        "type": "knn_vector",
        "dimension": 1024,
        "method": {
          "name": "hnsw",
          "space_type": "l2",
          "engine": "nmslib",
          "parameters": {
            "ef_construction": 128,
            "m": 24
          }
        }
      }
    }
  }
}
```


k-NN vector

You must designate the field that will store vectors as a knn_vector field type. OpenSearch supports vectors of up to 16,000 dimensions, each dimension represented as a 32-bit or 16-bit float.

To save storage space, you can use byte or binary vectors. For more information, see Lucene byte vector and Binary k-NN vectors.
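As a sketch of the byte option, the following mapping stores each dimension as a signed byte instead of a float. The index name and field name are placeholders; the `data_type` parameter and the Lucene engine pairing follow the byte vector documentation, but check the linked pages for the exact constraints in your OpenSearch version:

```json
PUT byte-vector-index
{
  "settings": {
    "index": {
      "knn": true
    }
  },
  "mappings": {
    "properties": {
      "my_byte_vector": {
        "type": "knn_vector",
        "dimension": 8,
        "data_type": "byte",
        "method": {
          "name": "hnsw",
          "space_type": "l2",
          "engine": "lucene"
        }
      }
    }
  }
}
```

With `data_type` set to `byte`, each ingested vector element must be an integer in the range [-128, 127].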

Vector search finds the vectors in your database that are most similar to the query vector. OpenSearch supports the following search methods:

  • Approximate search (approximate k-NN, or ANN): Returns approximate nearest neighbors to the query vector. Usually, approximate search algorithms sacrifice indexing speed and search accuracy in exchange for performance benefits such as lower latency, smaller memory footprints, and more scalable search. For most use cases, approximate search is the best option.

  • Exact search (exact k-NN): A brute-force, exact k-NN search of vector fields. OpenSearch supports the following types of exact search:

    • Exact k-NN with scoring script: Using the k-NN scoring script, you can apply a filter to an index before executing the nearest neighbor search.
    • Painless extensions: Adds the distance functions as Painless extensions that you can use in more complex combinations. You can use this method to perform a brute-force, exact k-NN search of an index, which also supports pre-filtering.
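As an illustration of the scoring script approach, the following query sketch first narrows the candidate set with a standard query, then ranks the filtered documents by exact vector distance. The index, field, and filter values are hypothetical; the `knn_score` script and its parameters follow the k-NN scoring script documentation:

```json
POST /my-knn-index/_search
{
  "size": 3,
  "query": {
    "script_score": {
      "query": {
        "term": {
          "status": "available"
        }
      },
      "script": {
        "source": "knn_score",
        "lang": "knn",
        "params": {
          "field": "my_vector",
          "query_value": [5.0, 4.0],
          "space_type": "l2"
        }
      }
    }
  }
}
```

Because the filter runs before the brute-force distance computation, exact search over the reduced candidate set can be practical even on larger indexes.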

OpenSearch supports several algorithms for approximate vector search, each with its own advantages. For complete documentation, see Approximate search. For more information about the search methods and engines, see Method definitions. For method recommendations, see Choosing the right method.

To use approximate vector search, specify one of the following search methods (algorithms) in the method parameter:

  • Hierarchical Navigable Small World (HNSW)
  • Inverted File System (IVF)

Additionally, specify the engine (library) that implements this method in the engine parameter.

The following table lists the combinations of search methods and libraries supported by the k-NN plugin for approximate vector search.

| Method | Engine |
| --- | --- |
| HNSW | NMSLIB, Faiss, Lucene |
| IVF | Faiss |

Engine recommendations

In general, select NMSLIB or Faiss for large-scale use cases. Lucene is a good option for smaller deployments and offers benefits like smart filtering, where the optimal filtering strategy—pre-filtering, post-filtering, or exact k-NN—is automatically applied depending on the situation. The following table summarizes the differences between each option.

| | NMSLIB/HNSW | Faiss/HNSW | Faiss/IVF | Lucene/HNSW |
| --- | --- | --- | --- | --- |
| Max dimensions | 16,000 | 16,000 | 16,000 | 1,024 |
| Filter | Post-filter | Post-filter | Post-filter | Filter during search |
| Training required | No | No | Yes | No |
| Similarity metrics | l2, innerproduct, cosinesimil, l1, linf | l2, innerproduct | l2, innerproduct | l2, cosinesimil |
| Number of vectors | Tens of billions | Tens of billions | Tens of billions | Less than 10 million |
| Indexing latency | Low | Low | Lowest | Low |
| Query latency and quality | Low latency and high quality | Low latency and high quality | Low latency and low quality | High latency and high quality |
| Vector compression | Flat | Flat, product quantization | Flat, product quantization | Flat |
| Memory consumption | High | High; low with PQ | Medium; low with PQ | High |

Example

In this example, you’ll create a k-NN index, add data to the index, and search the data.

Step 1: Create a k-NN index

First, create an index that will store sample hotel data. Set index.knn to true and specify the location field as a knn_vector:

```json
PUT /hotels-index
{
  "settings": {
    "index": {
      "knn": true,
      "knn.algo_param.ef_search": 100,
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  },
  "mappings": {
    "properties": {
      "location": {
        "type": "knn_vector",
        "dimension": 2,
        "method": {
          "name": "hnsw",
          "space_type": "l2",
          "engine": "lucene",
          "parameters": {
            "ef_construction": 100,
            "m": 16
          }
        }
      }
    }
  }
}
```


Step 2: Add data to your index

Next, add data to your index. Each document represents a hotel. The location field in each document contains a vector specifying the hotel’s location:

```json
POST /_bulk
{ "index": { "_index": "hotels-index", "_id": "1" } }
{ "location": [5.2, 4.4] }
{ "index": { "_index": "hotels-index", "_id": "2" } }
{ "location": [5.2, 3.9] }
{ "index": { "_index": "hotels-index", "_id": "3" } }
{ "location": [4.9, 3.4] }
{ "index": { "_index": "hotels-index", "_id": "4" } }
{ "location": [4.2, 4.6] }
{ "index": { "_index": "hotels-index", "_id": "5" } }
{ "location": [3.3, 4.5] }
```


Step 3: Search your data

Now search for hotels closest to the pin location [5, 4]. This location is labeled Pin in the following image. Each hotel is labeled with its document number.

[Image: Hotels and the pin location on a coordinate plane]

To search for the top three closest hotels, set k to 3:

```json
POST /hotels-index/_search
{
  "size": 3,
  "query": {
    "knn": {
      "location": {
        "vector": [5, 4],
        "k": 3
      }
    }
  }
}
```


The response contains the hotels closest to the specified pin location:

```json
{
  "took": 1093,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 3,
      "relation": "eq"
    },
    "max_score": 0.952381,
    "hits": [
      {
        "_index": "hotels-index",
        "_id": "2",
        "_score": 0.952381,
        "_source": {
          "location": [5.2, 3.9]
        }
      },
      {
        "_index": "hotels-index",
        "_id": "1",
        "_score": 0.8333333,
        "_source": {
          "location": [5.2, 4.4]
        }
      },
      {
        "_index": "hotels-index",
        "_id": "3",
        "_score": 0.72992706,
        "_source": {
          "location": [4.9, 3.4]
        }
      }
    ]
  }
}
```
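The scores in this response can be reproduced by hand. For the l2 space type, OpenSearch derives the score from the squared Euclidean distance as 1 / (1 + distance²), so closer hotels score higher. A short Python sketch of that computation over the sample data:

```python
# Sample hotel locations from the bulk request above.
hotels = {
    "1": (5.2, 4.4),
    "2": (5.2, 3.9),
    "3": (4.9, 3.4),
    "4": (4.2, 4.6),
    "5": (3.3, 4.5),
}

# Query pin location.
pin = (5.0, 4.0)

def knn_score(a, b):
    """l2 score used by OpenSearch: 1 / (1 + squared Euclidean distance)."""
    sq_dist = sum((x - y) ** 2 for x, y in zip(a, b))
    return 1.0 / (1.0 + sq_dist)

# Rank all hotels by score, descending, and keep the top 3 (k = 3).
ranked = sorted(hotels, key=lambda doc_id: knn_score(pin, hotels[doc_id]), reverse=True)
top3 = ranked[:3]
```

Running this ranks documents 2, 1, and 3 first, with scores matching the `_score` values in the response (for example, hotel 2 is at squared distance 0.05 from the pin, giving 1 / 1.05 ≈ 0.952381).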

Vector search with filtering

For information about vector search with filtering, see k-NN search with filters.

Generating vector embeddings within OpenSearch

Neural search encapsulates the infrastructure needed to perform semantic vector searches. After you integrate an inference (embedding) service, neural search functions like lexical search, accepting a textual query and returning relevant documents.

When you index your data, neural search transforms text into vector embeddings and indexes both the text and its vector embeddings in a vector index. When you use a neural query during search, neural search converts the query text into vector embeddings and uses vector search to return the results.
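As a sketch of the query side, a neural query looks much like a lexical query but takes raw text and a model ID; OpenSearch embeds the text with the referenced model before running the vector search. The index name, field name, and model ID below are placeholders:

```json
POST /my-nlp-index/_search
{
  "query": {
    "neural": {
      "passage_embedding": {
        "query_text": "wild west",
        "model_id": "<model ID>",
        "k": 5
      }
    }
  }
}
```

Here `passage_embedding` is the knn_vector field populated at ingestion time, and `k` is the number of nearest neighbors to retrieve from the vector index.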

Choosing a model

The first step in setting up neural search is choosing a model. You can upload a model to your OpenSearch cluster, use one of the pretrained models provided by OpenSearch, or connect to an externally hosted model. For more information, see Integrating ML models.

Neural search tutorial

For a step-by-step tutorial, see Neural search tutorial.

Search methods

Choose one of the following search methods to use your model for neural search:

  • Semantic search: Uses dense retrieval based on text embedding models to search text data.

  • Hybrid search: Combines lexical and neural search to improve search relevance.

  • Multimodal search: Uses neural search with multimodal embedding models to search text and image data.

  • Neural sparse search: Uses neural search with sparse retrieval based on sparse embedding models to search text data.

  • Conversational search: With conversational search, you can ask questions in natural language, receive a text response, and ask additional clarifying questions.