
Doug Turnbull

I share search tips, blog articles, and free events I'm hosting about the search and retrieval industry, vector databases, information retrieval, and more.

Featured Post

Blog post - Can BM25 be a probability?

Reviewing Bayesian BM25 - a new approach to creating calibrated BM25 probabilities for hybrid search. I talk about this vs naive approaches I've used to do similar things. Enjoy! https://softwaredoug.com/blog/2026/03/06/probabilistic-bm25-utopia -Doug

You may know BM25 lets you tune two parameters:

k1: how quickly to saturate document term frequency's contribution
b: how much to bias towards below-average-length docs

What you may NOT know is there is a third parameter: k3. What does k3 do? It handles repeated query terms. Old papers suggest k3 = 100 to 1000 - at those values the factor is nearly linear in query term frequency, barely saturating at all. That's why Lucene ignores k3 and just uses the query term frequency. Some other search engines, like Terrier, set it to 8. So for the query, “Best dog toys for...
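To make k3 concrete, here's a quick Python sketch of the classic query-side factor from the early BM25 papers, (k3 + 1) * qtf / (k3 + qtf). The qtf values are made up:

```python
def k3_factor(qtf: int, k3: float) -> float:
    """Classic BM25 query-side factor: (k3 + 1) * qtf / (k3 + qtf)."""
    return ((k3 + 1) * qtf) / (k3 + qtf)

# A query term repeated twice, e.g. "dog" in "dog toys for dog lovers" (qtf = 2):
for k3 in (8, 100, 1000):
    print(k3, round(k3_factor(2, k3), 3))
# 8    -> 1.8    noticeable saturation (Terrier's setting)
# 100  -> 1.98   nearly linear
# 1000 -> 1.998  effectively just qtf, matching Lucene's behavior
```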

Rare terms have high inverse document frequency (IDF). BM25 scoring treats high-IDF terms as more relevant. Why? We assume that if a term occurs rarely in the corpus, it must unambiguously point to what the user wants. It's specific. But that's not always true. Not all text is created equal - corpora violate this assumption frequently. Why?

No need to use a common term - book titles may rarely mention the word “book”, but clearly “book” in a book index has low specificity.
Language gaps between...
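To see the assumption in numbers, here's a rough sketch using a Lucene-style BM25 IDF formula. The document frequencies for this hypothetical book-title index are invented:

```python
import math

def bm25_idf(doc_freq: int, num_docs: int) -> float:
    """Lucene-style BM25 IDF: log(1 + (N - df + 0.5) / (df + 0.5))."""
    return math.log(1 + (num_docs - doc_freq + 0.5) / (doc_freq + 0.5))

# Hypothetical index of 1M book titles:
print(bm25_idf(doc_freq=500, num_docs=1_000_000))      # "book": ~7.6 - rare in titles, so high IDF
print(bm25_idf(doc_freq=120_000, num_docs=1_000_000))  # "love": ~2.1 - common in titles, low IDF
```

The formula happily hands “book” a big weight, even though in a book index it tells you almost nothing about what the user wants.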

BM25 models the odds a term would be observed in a relevant document (vs the term occurring in an irrelevant doc). It's based on probabilistic relevance, capturing:

t - a query term match occurs
R - the doc is relevant

Queries of course contain multiple terms. How do we combine those odds? To get the odds of BOTH terms being in a relevant doc, we'd need to multiply Odds(t1) * Odds(t2). If we take the log of these multiplied odds, we can take advantage of a property of logarithms: log(Odds(t1) *...
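Spelled out, the logarithm property the tip is invoking is just the product-to-sum rule, where Odds(t) is the relevance odds described above:

```latex
\log\bigl(\mathrm{Odds}(t_1) \cdot \mathrm{Odds}(t_2)\bigr)
  = \log \mathrm{Odds}(t_1) + \log \mathrm{Odds}(t_2),
\qquad
\mathrm{Odds}(t) = \frac{P(t \mid R)}{P(t \mid \bar{R})}
```

Summing per-term log-odds is why a multi-term BM25 score is just a sum of per-term scores.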

If pointwise evals ask “How relevant is this from 1-5?”, pairwise search evals ask “Which of these two results is more relevant - X or Y?” Comparing two items at a time has some advantages:

Less chance for per-decision error - it's harder to screw up “one is better than the other”
More precise results - fine-grained distinctions that can't be shoved into a 1-5 scale
Faster decisions - comparisons often can be made quicker

However, two major downsides remain:

Pairwise evals take more time - instead of rating...
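The “more time” downside is easy to quantify. A quick sketch, assuming you judge every pair:

```python
from math import comb

# Pointwise: one label per result. Pairwise: one judgment per pair,
# which grows quadratically with the number of results.
for n in (5, 10, 50):
    print(f"{n} results: {n} pointwise labels vs {comb(n, 2)} pairwise comparisons")
# 5 results: 5 vs 10
# 10 results: 10 vs 45
# 50 results: 50 vs 1225
```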

In the previous tip, we discussed how pointwise 1-5 labels fall apart. The expert rater gives only nit-picky ratings, way beyond the considerations of actual users. A naive rater has little knowledge of the domain, and may tend to consider most results relevant. How do we handle this situation? We handle it by using multiple raters for the same document. We can’t rely on just one! Then, when we have enough ratings, we can use a metric like Fleiss’s Kappa to measure whether raters tend to...
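As a minimal sketch of what that looks like in practice (assuming statsmodels is available - the ratings below are made up):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Made-up data: 6 query/document pairs, each rated 1-5 by 4 raters.
ratings = np.array([
    [5, 5, 4, 5],
    [1, 2, 1, 1],
    [3, 4, 5, 2],   # raters disagree here
    [2, 2, 2, 3],
    [5, 4, 5, 5],
    [1, 1, 2, 1],
])

# aggregate_raters turns the (items x raters) label matrix into
# per-category counts, the table format fleiss_kappa expects.
table, _categories = aggregate_raters(ratings)
print(fleiss_kappa(table))  # ~1.0 = perfect agreement, ~0 = chance-level
```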

On Tuesday, Nathan VanBenschoten of Turbopuffer will share how they scaled to 100B-scale vector search in object storage. If you don't know Turbopuffer, it's probably the fastest-growing vector db / search engine of the last year - used by Notion, Cursor, and Anthropic for vector retrieval. They particularly thrive in high-scale scenarios relative to the cost. In this talk Nathan shares the journey of how they scaled up to 100B while keeping latency and cost reasonable. Sign up here -...

A judgment list labels a document as relevant / irrelevant for a query. So you get a label, say 1-5, for how relevant the movie First Blood is for the query Rambo. Here's what happens in practice, though:

First, a rater sees Rambo III - they give it a rating of 5/5
Next they see First Blood, the original Rambo movie - they also rate it 5/5
That rater might reflect - wait, should I go back and adjust my original rating for the sequel?

Even with careful coaching, raters often use inconsistent...

I mentioned my experience with Shopify merchants that controlled their own search quality. They manually outperformed our best algorithms. The average Shopify store is a specific case:

A handful of popular queries drive outsized sales
Catalogs tend to be smaller
Sellers create new products in niche domains

Contrast this with the general Amazon-like megastore:

Billions of unique queries
Long-tail queries
Huge catalog
Catalog constantly changes
Generic products

Is search management useful here?...

You built a pretty good query understanding solution. It's an improvement. You have to ship tomorrow. One problem - the query: purple mattress. Turns out that's not a mattress colored purple. It's a brand named Purple. But our otherwise smart query understanding solution sees purple as a color. You have to ship tomorrow. And this is a pretty popular query. Do you (a) do the right ML thing: try to train a better model to fix it? Or (b) just accept models will be imperfect and add manual exceptions?...
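If you pick (b), the exception layer can be as boring as a lookup consulted before the model. Everything below - names, fields, the stand-in model - is hypothetical:

```python
# Hand-curated overrides win for known-problematic, high-traffic queries.
MANUAL_OVERRIDES = {
    "purple mattress": {"brand": "Purple", "category": "mattress"},
}

def understand_query(query: str) -> dict:
    normalized = query.strip().lower()
    if normalized in MANUAL_OVERRIDES:
        return MANUAL_OVERRIDES[normalized]
    return run_model(normalized)  # fall through to the ML model

def run_model(query: str) -> dict:
    # Stand-in for the real query understanding model, which would
    # (incorrectly) tag "purple" as a color here.
    return {"color": "purple", "category": "mattress"}

print(understand_query("Purple mattress"))  # {'brand': 'Purple', 'category': 'mattress'}
```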