This post is cowritten by Ishan Gupta, Co-Founder and Chief Technology Officer, Juicebox.
Juicebox is an AI-powered talent sourcing search engine that uses advanced natural language models to help recruiters identify the best candidates from a vast dataset of over 800 million profiles. At the core of this functionality is Amazon OpenSearch Service, which provides the backbone of Juicebox's search infrastructure and enables a seamless combination of traditional full-text search methods with modern semantic search capabilities.
In this post, we share how Juicebox uses OpenSearch Service for improved search.
Challenges in recruiting search
Recruiting search engines traditionally rely on simple Boolean or keyword-based searches. These methods aren't effective at capturing the nuance and intent behind complex queries, often leading to large volumes of irrelevant results. Recruiters spend unnecessary time filtering through these results, a process that's both time-consuming and inefficient.
In addition, recruiting search engines often struggle to scale with large datasets, creating latency issues and performance bottlenecks as more data is indexed. At Juicebox, with a database growing to more than 1 billion documents and millions of profiles being searched per minute, we needed a solution that could not only handle massive-scale data ingestion and querying, but also support contextual understanding of complex queries.
Solution overview
The following diagram illustrates the solution architecture.
OpenSearch Service securely unlocks real-time search, monitoring, and analysis of business and operational data for use cases like application monitoring, log analytics, observability, and website search. You send documents to OpenSearch Service and retrieve them with search queries that match text and vector embeddings for fast, relevant results.
At Juicebox, we solved five challenges with Amazon OpenSearch Service, which we discuss in the following sections.
Problem 1: High latency in candidate search
Initially, we faced significant delays in returning search results due to the scale of our dataset, especially for complex semantic queries that require deep contextual understanding. Other full-text search engines couldn't meet our requirements for speed or relevance when it came to understanding the recruiter intent behind each search.
Solution: BM25 for fast, accurate full-text search
The OpenSearch Service BM25 algorithm quickly proved invaluable, allowing Juicebox to optimize full-text search performance while maintaining accuracy. Through keyword relevance scoring, BM25 ranks profiles based on the likelihood that they match the recruiter's query. This optimization reduced our average query latency from around 700 milliseconds to 250 milliseconds, allowing recruiters to retrieve relevant profiles much faster than with our previous search implementation.
With BM25, we saw a nearly threefold reduction in latency for keyword-based searches, improving the overall search experience for our users.
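Because BM25 is the default similarity in OpenSearch, a ranked full-text search is just a standard query. The following sketch shows what such a request body can look like; the index fields (`headline`, `summary`, `skills`) and boost values are illustrative placeholders, not Juicebox's actual schema.

```python
# A BM25-ranked full-text query: OpenSearch scores multi_match queries
# with BM25 by default, so no extra similarity configuration is needed.
# Field names and boosts below are hypothetical.
bm25_query = {
    "size": 10,
    "query": {
        "multi_match": {
            "query": "senior data scientist NLP",
            # "^2" boosts matches in the headline field over the others
            "fields": ["headline^2", "summary", "skills"],
        }
    },
}

# With the opensearch-py client, this would be submitted as:
#   client.search(index="profiles", body=bm25_query)
```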
Problem 2: Matching intent, not just keywords
In recruiting, exact keyword matching can often lead to missing out on qualified candidates. A recruiter looking for "data scientists with NLP experience" might miss candidates with "machine learning" in their profiles, even though they have the right expertise.
Solution: k-NN-powered vector search for semantic understanding
To address this, Juicebox uses k-nearest neighbor (k-NN) vector search. Vector embeddings allow the system to understand the context behind recruiter queries and match candidates based on semantic meaning, not just keyword matches. We maintain a billion-scale vector search index capable of low-latency k-NN search, thanks to OpenSearch Service optimizations such as product quantization. The neural search capability allowed us to build a Retrieval Augmented Generation (RAG) pipeline that embeds natural language queries before searching. OpenSearch Service also lets us tune Hierarchical Navigable Small World (HNSW) hyperparameters such as m, ef_search, and ef_construction, which enabled us to meet our latency, recall, and cost goals.
Semantic search, powered by k-NN, allowed us to surface 35% more relevant candidates compared to keyword-only searches for complex queries. These searches remained fast and accurate, with vectorized queries achieving 0.9+ recall.
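The HNSW hyperparameters mentioned above are set when the k-NN index is created. The following is a minimal sketch of such an index definition; the field name, embedding dimension, engine choice, and parameter values are illustrative assumptions, not Juicebox's production settings.

```python
# Hypothetical k-NN index body for OpenSearch. The HNSW parameters trade
# recall against latency and index build cost:
#   m               - graph connectivity (edges per node)
#   ef_construction - candidate list size while building the graph
#   ef_search       - candidate list size scanned at query time
index_body = {
    "settings": {
        "index": {
            "knn": True,
            "knn.algo_param.ef_search": 256,
        }
    },
    "mappings": {
        "properties": {
            "profile_embedding": {
                "type": "knn_vector",
                "dimension": 768,  # must match the embedding model's output
                "method": {
                    "name": "hnsw",
                    "space_type": "cosinesimil",
                    "engine": "nmslib",
                    "parameters": {
                        "m": 16,
                        "ef_construction": 512,
                    },
                },
            }
        }
    },
}

# With the opensearch-py client, this would be created as:
#   client.indices.create(index="profiles", body=index_body)
```

Raising `ef_search` and `ef_construction` generally improves recall at the cost of query latency and indexing time, which is why benchmarking these values (as discussed in the next section) matters.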
Problem 3: Difficulty in benchmarking machine learning models
Several key performance indicators (KPIs) measure the success of your search. When you use vector embeddings, you have many choices to make when selecting a model, fine-tuning it, and picking hyperparameters. You need to benchmark your solution to make sure you're getting the right latency, cost, and especially accuracy. Benchmarking machine learning (ML) models for recall and performance is difficult because of the vast number of fast-evolving models available (such as those on the MTEB leaderboard on Hugging Face). We struggled to select and measure models accurately while making sure we performed well across large-scale datasets.
Solution: Exact k-NN with scoring script in OpenSearch Service
Juicebox used the exact k-NN with scoring script feature to address these challenges. This feature enables precise benchmarking by executing brute-force nearest neighbor searches and applying filters to a subset of vectors, making sure that recall metrics are accurate. Model testing was streamlined using the wide range of pre-trained models and ML connectors (integrated with Amazon Bedrock and Amazon SageMaker) provided by OpenSearch Service. The flexibility of applying filtering and custom scoring scripts helped us evaluate multiple models across high-dimensional datasets with confidence.
Juicebox was able to measure model performance with fine-grained control, achieving 0.9+ recall. Using exact k-NN allowed Juicebox to benchmark faster and more reliably, even on billion-scale data, providing the confidence needed for model selection.
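As a rough illustration, exact k-NN in OpenSearch is expressed as a `script_score` query that brute-force scores only the documents matched by an inner filter, which is what makes ground-truth recall measurement practical. The field name, filter clause, and query vector below are hypothetical.

```python
# Exact (brute-force) k-NN via OpenSearch's k-NN scoring script.
# Only documents passing the bool filter are scored, so the brute-force
# scan stays affordable; results serve as ground truth for recall.
query_vector = [0.1] * 768  # stand-in for a real query embedding

exact_knn_query = {
    "size": 100,
    "query": {
        "script_score": {
            # Hypothetical filter restricting the scored subset
            "query": {"bool": {"filter": {"term": {"country": "US"}}}},
            "script": {
                "source": "knn_score",
                "lang": "knn",
                "params": {
                    "field": "profile_embedding",
                    "query_value": query_vector,
                    "space_type": "cosinesimil",
                },
            },
        }
    },
}
```

Comparing the IDs returned by this exact query against those from the approximate HNSW query for the same vector yields the recall figure directly.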
Problem 4: Lack of data-driven insights
Recruiters need to not only find candidates, but also gain insights into broader talent industry trends. Analyzing hundreds of millions of profiles to identify trends in skills, geographies, and industries was computationally intensive. Most other search engines that support full-text search or k-NN search didn't support aggregations.
Solution: Advanced aggregations with OpenSearch Service
The powerful aggregation features of OpenSearch Service allowed us to build Talent Insights, a feature that provides recruiters with actionable insights from aggregated data. By performing large-scale aggregations across millions of profiles, we identified key skills and hiring trends, and helped clients adjust their sourcing strategies.
Aggregation queries now run over 100 million profiles and return results in under 800 milliseconds, allowing recruiters to generate insights instantly.
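A trend query of this kind can be sketched as a terms aggregation, optionally nested to break one dimension down by another. The field names (`skills.keyword`, `region.keyword`) and the match clause here are hypothetical examples, not Juicebox's schema.

```python
# Hypothetical aggregation request: top skills overall, plus top skills
# per region, across profiles matching a full-text query.
agg_query = {
    "size": 0,  # we only want aggregation buckets, not individual hits
    "query": {"match": {"summary": "machine learning"}},
    "aggs": {
        "top_skills": {
            "terms": {"field": "skills.keyword", "size": 20}
        },
        "by_region": {
            "terms": {"field": "region.keyword", "size": 10},
            "aggs": {
                "skills": {"terms": {"field": "skills.keyword", "size": 5}}
            },
        },
    },
}
```

Setting `"size": 0` skips fetching documents entirely, which keeps aggregation-only queries fast even over very large indexes.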
Problem 5: Streamlining data ingestion and indexing
Juicebox ingests data continuously from multiple sources across the web, reaching terabytes of new data per month. We needed a robust data pipeline to ingest, index, and query this data at scale without performance degradation.
Solution: Scalable data ingestion with Amazon OpenSearch Ingestion pipelines
Using Amazon OpenSearch Ingestion, we implemented scalable pipelines. This allowed us to efficiently process and index hundreds of millions of profiles each month without worrying about pipeline failures or system bottlenecks. We used AWS Glue to preprocess data from multiple sources, chunk it for optimal processing, and feed it into our indexing pipeline.
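Amazon OpenSearch Ingestion pipelines are defined in a Data Prepper-style YAML configuration. The following is a minimal sketch of what such a definition can look like for reading preprocessed files from Amazon S3 into an index; the bucket, endpoint, index name, and Region are placeholders, not Juicebox's actual configuration.

```yaml
# Hypothetical OpenSearch Ingestion pipeline: scan an S3 bucket of
# newline-delimited JSON profiles and write them to an index.
version: "2"
profiles-pipeline:
  source:
    s3:
      codec:
        ndjson:
      compression: "none"
      scan:
        buckets:
          - bucket:
              name: "example-preprocessed-profiles"   # placeholder bucket
      aws:
        region: "us-west-2"                           # placeholder Region
  sink:
    - opensearch:
        hosts: ["https://example-domain.us-west-2.es.amazonaws.com"]
        index: "profiles"
        aws:
          region: "us-west-2"
```

Because the pipeline is managed, capacity scales with ingestion volume, which is what removes the pipeline-failure and bottleneck concerns described above.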
Conclusion
In this post, we shared how Juicebox uses OpenSearch Service for improved search. We can now index hundreds of millions of profiles per month, keeping our data fresh and up to date, while maintaining real-time availability for searches.
About the authors
Ishan Gupta is the Co-Founder and CTO of Juicebox, an AI-powered recruiting software startup backed by top Silicon Valley investors including Y Combinator, Nat Friedman, and Daniel Gross. He has built search products used by thousands of customers to recruit talent for their teams.
Jon Handler is the Director of Solutions Architecture for Search Services at Amazon Web Services, based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads for OpenSearch. Prior to joining AWS, Jon's career as a software developer included four years of coding a large-scale, eCommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a Ph.D. in Computer Science and Artificial Intelligence from Northwestern University.