Don't push complex ranking into the search engine. Layering operation on top of plugin on top of who-knows-what-else harms user experience. Why? Tail latency.

In a distributed system, your query is only as fast as your slowest node. A rare event on a single node becomes a frequent event across the full cluster. Consider a single-node benchmark: p50 of 50 ms, p99 of 200 ms. Seems reasonable. With 100 nodes, on average one node hits its p99 on every request. The cluster must wait for that slow node to complete before responding, so users experience the p99 (200 ms) on every request.

Adding complexity makes this worse. The system pages more memory, context switches between threads, and occasionally takes a winding path through IO. Node execution becomes unpredictable and bursty. Now, perhaps, per-node p99 jumps to 1000 ms: the tail stretches. Since a single node's p99 is roughly a 100-node cluster's p50, from the user's perspective the typical request now takes 1000 ms.
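A toy simulation makes the math concrete. The node latencies below are made-up numbers matching the example above (p50 of 50 ms, ~1% of requests at 200 ms), and a scatter-gather query is modeled as waiting for the slowest of 100 nodes:

```python
import random

random.seed(0)

def node_latency_ms():
    """Toy single-node latency: usually fast, ~1% slow (illustrative numbers)."""
    return 50 if random.random() < 0.99 else 200

def cluster_latency_ms(n_nodes):
    """A scatter-gather query waits for the slowest of n nodes."""
    return max(node_latency_ms() for _ in range(n_nodes))

def p50(samples):
    return sorted(samples)[len(samples) // 2]

single = [node_latency_ms() for _ in range(10_000)]
cluster = [cluster_latency_ms(100) for _ in range(10_000)]

print("single-node p50:", p50(single))       # 50 ms
print("100-node cluster p50:", p50(cluster)) # 200 ms: one node's p99 dominates
```

The key line is the `max()`: the chance that at least one of 100 nodes hits its 1%-rare slow path is 1 - 0.99^100 ≈ 63%, so the rare case becomes the median case.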
So the larger the cluster, the simpler you should keep first-pass retrieval.

-Doug

PS today at 12:30 PM prices increase for Cheat at Search with Agents: http://maven.com/softwaredoug/cheat-at-search

Events · Consulting · Training (use code search-tips) You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile.
I share search tips, blog articles, and free events I'm hosting about the search+retrieval industry, vector databases, information retrieval and more.
Good vector search means more than embeddings. Embeddings don't know when a result matches or doesn't match. Similarity floors don't work consistently: a cutoff that works for one query might be disastrous for another. Even worse, your embedding usually can't capture every little bit of meaning from your corpus. You need to efficiently pick the best top N candidates from your vector database. What do you need? Query Understanding - translating the query to domain language (categories, colors,...
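To see why a similarity floor misbehaves, here's a minimal sketch with hypothetical cosine similarities for two queries (the doc names and scores are invented for illustration). A cutoff tuned on one query wipes out another query's perfectly good results, while top-N selection degrades gracefully:

```python
def top_n(candidates, n):
    """Pick the n best candidates by similarity score, no absolute cutoff."""
    return [doc for doc, _ in sorted(candidates, key=lambda c: c[1], reverse=True)[:n]]

# Hypothetical cosine similarities (illustrative numbers, not real data).
broad_query = [("doc_a", 0.81), ("doc_b", 0.78), ("doc_c", 0.40)]
niche_query = [("doc_x", 0.55), ("doc_y", 0.52), ("doc_z", 0.12)]

FLOOR = 0.6  # a cutoff that looks fine on the broad query...
print([d for d, s in broad_query if s >= FLOOR])  # ['doc_a', 'doc_b']
print([d for d, s in niche_query if s >= FLOOR])  # [] -- floor kills everything
print(top_n(niche_query, 2))                      # ['doc_x', 'doc_y']
```

Absolute similarity scales shift per query and per embedding model, which is why a fixed floor can't be trusted; ranking and taking the top N sidesteps the problem.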
Reciprocal Rank Fusion merges one system's search ranking with another's (ie lexical + embedding search). RRF scores a document with ∑ 1/rank across the underlying systems. I've found RRF is not enough. Here's the typical pattern I see on teams: A mature lexical solution exists, and it's pretty good. The team wants to add untuned, embedding-based retrieval. They deploy a vector DB and RRF the embedding results with the mature system. Disaster ensues! The poor embedding results drag down the lexical...
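RRF itself is only a few lines. This sketch uses the common smoothed form, score(doc) = ∑ 1/(k + rank), with k=60 as the usual constant (set k=0 for the plain ∑ 1/rank version described above); the doc IDs are invented for illustration:

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: score(doc) = sum over systems of 1 / (k + rank).
    Ranks are 1-based; k is a smoothing constant (k=0 gives plain 1/rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["d1", "d2", "d3", "d4"]     # mature, well-tuned ranking
embedding = ["d9", "d8", "d2", "d1"]   # untuned vector results
fused = rrf([lexical, embedding])
print(fused)
```

Note the failure mode from the pattern above: `d9`, the untuned embedding system's top hit, lands above `d3`, a solid lexical result, purely because RRF trusts both systems equally.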
Just sharing my post on Bayesian BM25 and other ways of normalizing BM25 scores. Enjoy! https://softwaredoug.com/blog/2026/03/06/probabilistic-bm25-utopia

Do you have any thoughts on normalizing BM25 scores?

-Doug
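For readers who haven't thought about the problem before, the simplest baseline is per-query min-max normalization. This is not the Bayesian approach from the linked post, just a minimal sketch, and it has a known weakness: because it normalizes within a single query's result list, the scores still aren't comparable as absolute relevance across queries:

```python
def minmax_normalize(scores):
    """Squash one query's raw BM25 scores into [0, 1].
    Per-query min-max: the top result always becomes 1.0, even when
    every result is bad, which is why fancier approaches exist."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)  # all ties: no spread to normalize
    return [(s - lo) / (hi - lo) for s in scores]

print(minmax_normalize([12.4, 9.1, 3.2]))  # top doc -> 1.0, bottom -> 0.0
```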