High IDF doesn't always mean relevant search (daily search tips)


Rare terms have high inverse document frequency (IDF), and BM25 scoring treats high-IDF terms as stronger signals of relevance.

Why?

We assume if a term occurs rarely in the corpus, it must unambiguously point to what the user wants. It’s specific.
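For concreteness, here's the IDF component Lucene uses in its BM25 implementation (the formula is Lucene's; the corpus numbers below are invented):

```python
import math

def bm25_idf(doc_freq: int, num_docs: int) -> float:
    """Lucene's BM25 IDF: log(1 + (N - df + 0.5) / (df + 0.5)).
    The rarer the term (lower df), the higher the score."""
    return math.log(1 + (num_docs - doc_freq + 0.5) / (doc_freq + 0.5))

# In a hypothetical 10,000-doc corpus:
bm25_idf(doc_freq=5, num_docs=10_000)      # rare term   -> ~7.5
bm25_idf(doc_freq=5_000, num_docs=10_000)  # common term -> ~0.69
```

The score gap is large: a term matching 5 documents contributes roughly ten times the IDF of a term matching half the corpus, which is exactly why a misleadingly rare term can dominate a query.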

But that’s not always true. Not all text is created equal, and real corpora violate this assumption frequently. Why?

  • No need to use a common term - book titles may rarely mention the word “book”, but “book” in a book index clearly has low specificity.
  • Language gaps between corpus + users - a medical corpus may not mention “heart attack” much. It may use the technical term “myocardial infarction” frequently. Yet a novice searching for "heart attack" still has a fairly generic intent.
  • Tags + editorial controls - some fields lack consistent editorial controls. For example, search teams might shove tags into search indices and hope for the best. Document frequency then becomes an artifact of human inconsistency, not natural language.
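To make the first failure mode concrete, here's a toy sketch (all document counts invented): document frequency sees only counts, so it can't distinguish a generic-but-rare term like “book” in a book catalog from a genuinely specific term with the same df.

```python
import math

def bm25_idf(doc_freq: int, num_docs: int) -> float:
    # Lucene-style BM25 IDF: log(1 + (N - df + 0.5) / (df + 0.5))
    return math.log(1 + (num_docs - doc_freq + 0.5) / (doc_freq + 0.5))

# Hypothetical catalog of 100,000 books where only 150 titles contain
# the literal word "book". A generic term and a specific term with the
# same document frequency score identically:
idf_book = bm25_idf(doc_freq=150, num_docs=100_000)     # "book": generic intent, IDF ~6.5
idf_octopus = bm25_idf(doc_freq=150, num_docs=100_000)  # "octopus": specific intent, same IDF
```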

Teams currently work around these problems by:

  • Cross-field or BM25F search: using a BM25 formulation that combines per-field statistics, most importantly the doc freq. So a more natural doc frequency for "book" might come not from a book's title field, but from its body.
  • Merging text into an “all field”: using features like copy fields in Lucene to create a “text_all” field for baseline matching. With everything merged, we get more accurate stats.
  • Manual overrides - it’s pretty cool that Vespa lets you manually specify term significance. I even once built a Solr plugin for managed stats to let you directly manipulate document frequency.
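Here's a sketch of the “all field” workaround on a toy three-document index (documents and field names are invented): computing document frequency over merged fields gives “book” the low IDF it deserves, where title-only stats inflate it.

```python
import math

def bm25_idf(doc_freq: int, num_docs: int) -> float:
    # Lucene-style BM25 IDF
    return math.log(1 + (num_docs - doc_freq + 0.5) / (doc_freq + 0.5))

# Toy index: "book" never appears in a title, but shows up in every body.
docs = [
    {"title": "the pragmatic programmer", "body": "a classic book on software craft"},
    {"title": "clean code", "body": "this book teaches readable code"},
    {"title": "site reliability engineering", "body": "a book about operations at scale"},
]

def doc_freq(term, field=None):
    """Count docs containing term, in one field or (copy-field style) in all fields merged."""
    def text(d):
        return d[field] if field else " ".join(d.values())
    return sum(term in text(d).split() for d in docs)

n = len(docs)
idf_title_only = bm25_idf(doc_freq("book", "title"), n)  # df=0 -> inflated IDF
idf_merged = bm25_idf(doc_freq("book"), n)               # df=3 -> low IDF, as it should be
```

The merged "text_all" statistics recover the intuition the title field loses: "book" matches everything, so it should contribute almost nothing.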

AI Powered Search training starts TOMORROW! Sign up here:

http://maven.com/search-school/ai-powered-search

-Doug

Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips, where I share tips, blog articles, events, and more. You can always manage your profile.

