The OpenSearch Vector Engine can now run vector search at a third of the cost on OpenSearch 2.17+ domains. You can now configure k-NN (vector) indexes to run in disk mode, optimizing them for memory-constrained environments and enabling low-cost, accurate vector search that responds in the low hundreds of milliseconds. Disk mode provides an economical alternative to memory mode when you don’t need near single-digit latency.
In this post, you’ll learn about the benefits of this new feature, the underlying mechanics, customer success stories, and how to get started.
Overview of vector search and the OpenSearch Vector Engine
Vector search is a technique that improves search quality by enabling similarity matching on content that has been encoded by machine learning (ML) models into vectors (numerical encodings). It enables use cases like semantic search, allowing you to consider context and intent along with keywords to deliver more relevant searches.
The OpenSearch Vector Engine enables real-time vector searches beyond billions of vectors by creating indexes on vectorized content. You can then run searches for the top K documents in an index that are most similar to a given query vector, which could be a question, keyword, or content (such as an image, audio clip, or text) that has been encoded by the same ML model.
Tuning the OpenSearch Vector Engine
Search applications have varying requirements in terms of speed, quality, and cost. For instance, ecommerce catalogs require the lowest possible response times and high-quality search to deliver a positive shopping experience. However, optimizing for search quality and performance gains typically incurs cost in the form of additional memory and compute.
The right balance of speed, quality, and cost depends on your use cases and customer expectations. The OpenSearch Vector Engine provides comprehensive tuning options so you can make smart trade-offs to achieve optimal results tailored to your unique requirements.
You can use the following tuning controls:
- Algorithms and parameters – This includes the following:
  - Hierarchical Navigable Small World (HNSW) algorithm and parameters like ef_search, ef_construct, and m
  - Inverted File Index (IVF) algorithm and parameters like nlist and nprobes
  - Exact k-nearest neighbors (k-NN), also known as brute-force k-NN (BFKNN)
- Engines – Facebook AI Similarity Search (FAISS), Lucene, and Non-Metric Space Library (NMSLIB)
- Compression techniques – Scalar (such as byte and half precision), binary, and product quantization
- Similarity (distance) metrics – Inner product, cosine, L1, L2, and Hamming
- Vector embedding types – Dense and sparse with variable dimensionality
- Ranking and scoring methods – Vector, hybrid (combining vector and Best Match 25 (BM25) scores), and multi-stage ranking (such as cross-encoders and personalizers)
You can adjust a combination of these tuning controls to achieve the balance of speed, quality, and cost that is optimal for your needs. The following diagram provides a rough performance profile for sample configurations, and the sketch that follows shows how some of these controls map to an index definition.
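As a minimal sketch, assuming the Python opensearch-py client and illustrative endpoint, index, and field names, this is one way to create a k-NN index that pins the engine, algorithm, and HNSW parameters explicitly (OpenSearch exposes the construction parameter as ef_construction in index mappings):

```python
from opensearchpy import OpenSearch

# Illustrative client setup; replace the endpoint and authentication with your own
client = OpenSearch(hosts=[{"host": "my-domain-endpoint", "port": 443}], use_ssl=True)

# k-NN index tuned explicitly: FAISS engine, HNSW algorithm, m and ef_construction set
client.indices.create(
    index="products-vectors",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 1024,
                    "method": {
                        "name": "hnsw",
                        "engine": "faiss",
                        "space_type": "innerproduct",
                        "parameters": {"m": 16, "ef_construction": 128},
                    },
                }
            }
        },
    },
)
```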
Tuning for disk-optimization
With OpenSearch 2.17+, you can configure your k-NN indexes to run in disk mode for high-quality, low-cost vector search by trading in-memory performance for higher latency. If your use case can tolerate 90th percentile (P90) latency in the range of 100–200 milliseconds, disk mode is an excellent option for achieving cost savings while maintaining high search quality. The following diagram illustrates disk mode’s performance profile among other engine configurations.
Disk mode was designed to run out of the box, reducing your memory requirements by 97% compared to memory mode while providing high search quality. You can also tune compression and sampling rates to adjust for speed, quality, and cost.
The following table presents performance benchmarks for disk mode’s default settings. OpenSearch Benchmark (OSB) was used to run the first three tests, and VectorDBBench (VDBB) was used for the last two. Performance tuning best practices were applied to achieve optimal results. The low-scale tests (Tasb-1M and Marco-1M) were run on a single r7gd.large data node with one replica. The other tests were run on two r7gd.2xlarge data nodes with one replica. The percent cost reduction metric is calculated by comparing against an equivalent, right-sized in-memory deployment with the default settings.
These tests are designed to demonstrate that disk mode can deliver high search quality with 32 times compression across a variety of datasets and models while maintaining our target latency (under 200 milliseconds at P90). These benchmarks aren’t designed for evaluating ML models. A model’s impact on search quality varies with multiple factors, including the dataset.
Disk mode’s optimizations under the hood
When you configure a k-NN index to run in disk mode, OpenSearch automatically applies a quantization technique, compressing vectors as they’re loaded to build a compressed index. By default, disk mode converts each full-precision vector (a sequence of hundreds to thousands of dimensions, each stored as a 32-bit number) into a binary vector, which represents each dimension as a single bit. This conversion results in a 32 times compression rate, enabling the engine to build an index that’s 97% smaller than one composed of full-precision vectors. A right-sized cluster will keep this compressed index in memory.
Compression lowers cost by reducing the memory the vector engine requires, but it sacrifices accuracy in return. Disk mode recovers accuracy, and therefore search quality, through a two-phase search process. The first phase of query execution begins by efficiently traversing the compressed index in memory for candidate matches. The second phase uses these candidates to oversample corresponding full-precision vectors. These full-precision vectors are stored on disk in a format designed to reduce I/O and optimize retrieval speed and efficiency. The sample of full-precision vectors is then used to augment and re-score the matches from phase one (using exact k-NN), thereby recovering the search quality lost to compression. Disk mode’s higher latency relative to memory mode comes from this re-scoring process, which requires disk access and additional computation.
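The oversampling applied in that second phase can be adjusted at query time. The following is a rough sketch, assuming the opensearch-py client and index from the earlier example and an illustrative oversample_factor value; verify the exact rescore options against the documentation for your OpenSearch version:

```python
# Placeholder query embedding; in practice, produce it with the same model used at ingest
query_vector = [0.1] * 1024

# Phase one searches the compressed in-memory index; the rescore setting controls how many
# full-precision vectors are oversampled from disk for the exact k-NN re-scoring pass
response = client.search(
    index="products-vectors",
    body={
        "size": 10,
        "query": {
            "knn": {
                "embedding": {
                    "vector": query_vector,
                    "k": 10,
                    # Oversample 2x the candidates before re-scoring with full precision
                    "rescore": {"oversample_factor": 2.0},
                }
            }
        },
    },
)
```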
Early customer successes
Customers are already running the vector engine in disk mode. In this section, we share testimonials from early adopters.
Asana is improving search quality for customers on their work management platform by phasing in semantic search capabilities through OpenSearch’s vector engine. They initially optimized the deployment by using product quantization to compress indexes by 16 times. By switching over to the disk-optimized configurations, they were able to potentially reduce cost by another 33% while maintaining their search quality and latency targets. These economics make it viable for Asana to scale to billions of vectors and democratize semantic search throughout their platform.
DevRev bridges the fundamental gap in software companies by directly connecting customer-facing teams with developers. As an AI-centered platform, it creates direct pathways from customer feedback to product development, helping over 1,000 companies accelerate growth with accurate search, fast analytics, and customizable workflows. Built on large language models (LLMs) and Retrieval Augmented Generation (RAG) flows running on OpenSearch’s vector engine, DevRev enables intelligent conversational experiences.
“With OpenSearch’s disk-optimized vector engine, we achieved our search quality and latency targets with 16x compression. OpenSearch offers scalable economics for our multi-billion vector search journey.”
– Anshu Avinash, Head of AI and Search at DevRev
Get started with disk mode on the OpenSearch Vector Engine
First, you need to determine the resources required to host your index. Start by estimating the memory required to support your disk-optimized k-NN index (with the default 32 times compression rate) using the following formula:
Required memory (bytes) = 1.1 x ((vector dimension count)/8 + 8 x m) x (vector count)
For instance, if you use the defaults for Amazon Titan Text V2, your vector dimension count is 1024. Disk mode uses the HNSW algorithm to build indexes, so m is one of the algorithm parameters, and it defaults to 16. If you build an index for a 1-billion vector corpus encoded by Amazon Titan Text, your memory requirement is 282 GB.
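As a quick check of that arithmetic, here is the formula applied to the Titan Text V2 example in Python:

```python
# Estimate memory for a disk-optimized (32x compressed) k-NN index using the formula above
def disk_mode_memory_gb(dimensions: int, m: int, vector_count: int) -> float:
    required_bytes = 1.1 * (dimensions / 8 + 8 * m) * vector_count
    return required_bytes / 10**9

# Amazon Titan Text V2 defaults: 1,024 dimensions; HNSW m defaults to 16
print(disk_mode_memory_gb(dimensions=1024, m=16, vector_count=1_000_000_000))
# ~281.6 GB, which rounds to the 282 GB cited above
```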
If you have a throughput-heavy workload, you need to make sure your domain has sufficient IOPS and CPUs as well. If you follow deployment best practices, you can use instance store and storage performance optimized instance types, which will generally provide you with sufficient IOPS. You should always perform load testing for high-throughput workloads and adjust the original estimates to accommodate higher IOPS and CPU requirements.
Now you can deploy an OpenSearch 2.17+ domain that has been right-sized to your needs. Create your k-NN index with the mode parameter set to on_disk, and then ingest your data. If you already have a k-NN index running in the default in_memory mode, you can convert it by switching the mode to on_disk, followed by a reindex task. After the index is rebuilt, you can downsize your domain accordingly.
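The following sketch (again using opensearch-py, with illustrative index names) shows one way to create a disk-optimized index and migrate an existing in-memory index with a reindex task; treat it as an assumption-laden outline to verify against the OpenSearch 2.17+ documentation:

```python
# Create a k-NN index in disk mode; compression defaults to 32x
client.indices.create(
    index="products-vectors-on-disk",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 1024,
                    "mode": "on_disk",
                }
            }
        },
    },
)

# Convert an existing in-memory index by reindexing its documents into the disk-mode index
client.reindex(
    body={
        "source": {"index": "products-vectors"},
        "dest": {"index": "products-vectors-on-disk"},
    },
    wait_for_completion=False,  # run as an asynchronous task for large corpora
)
```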
Conclusion
In this post, we discussed how you can benefit from running the OpenSearch Vector Engine in disk mode, shared customer success stories, and provided tips on getting started. You’re now ready to run the OpenSearch Vector Engine at as little as a third of the cost.
To learn more, refer to the documentation.
About the Authors
Dylan Tong is a Senior Product Manager at Amazon Web Services. He leads product initiatives for AI and machine learning (ML) on OpenSearch, including OpenSearch’s vector database capabilities. Dylan has decades of experience working directly with customers and building products and solutions in the database, analytics, and AI/ML space. Dylan holds a BSc and MEng degree in Computer Science from Cornell University.
Vamshi Vijay Nakkirtha is a software engineering manager working on the OpenSearch Project and Amazon OpenSearch Service. His primary interests include distributed systems.