In search, judgment lists fall apart. It’s, frankly, humbling and humiliating :)

The work isn’t whatever tech returns search results. The work is the measurement - the evaluation. Everything else in search becomes incidental. Better evals has been my humbling lesson from industry. Humbling not because it’s exciting or sexy, but because it’s hard. And it’s the work nobody wants to do. Everyone wants to do the modeling.

For this reason, almost everyone overreads the value of some naive judgment list method. Almost nobody wants to understand its limitations. The casual judgment list our team put together by hand, the crowdsourced one from experts, the clickstream-based one: none of these would I trust beyond well-scoped problems. It’s easy to cherry-pick when one approach worked, but ignore when it failed. It’s harder to admit our systems of evaluation have deep, fundamental flaws no miracle fix can cure.

All of that has to change if you train a model on user behaviors. Then this becomes the REAL work of search. It’s not whether you choose LambdaMART or a cross-encoder or a bi-encoder or 512 trained hamsters. The real modeling work is: “does this floating point number accurately describe whether a real user thinks this result is relevant or not?” That’s really hard, underappreciated, and not sexy work.

If that interests you, please come hang out at Cheat at Search Essentials tomorrow and learn about evals with me - and chat evals :) I’ll try to share the theory, the practice, and the practical, dumb, grug-based systems that actually work.
-Doug

PS: 6 days left for Cheat at Search with Agents - http://maven.com/softwaredoug/cheat-at-search (use code search-tips)
Reviewing Bayesian BM25 - a new approach to creating calibrated BM25 probabilities for hybrid search. I talk about this vs. naive approaches I've used to do similar things. Enjoy!

https://softwaredoug.com/blog/2026/03/06/probabilistic-bm25-utopia

-Doug
You may know BM25 lets you tune two parameters:

- k1: how quickly to saturate a document term frequency’s contribution
- b: how much to bias towards below-average-length docs

What you may NOT know is that there is another parameter: k3.

What does k3 do? It handles repeated query terms. Old papers suggest k3 = 100 to 1000, values so high the factor barely saturates - it’s effectively linear in query term frequency. That’s why Lucene ignores k3 and just uses the raw query term frequency. Some other search engines, like Terrier, set it to 8.

So for the query, “Best dog toys for...
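To make k3 concrete, here’s a minimal sketch of the classic Okapi query-side saturation factor, (k3 + 1) * qtf / (k3 + qtf). The formula is the standard Okapi one; the repeated-term counts and the k3 = 8 vs. k3 = 1000 comparison are just my illustration:

```python
def k3_factor(qtf: int, k3: float) -> float:
    """Okapi BM25 query-side saturation of query term frequency (qtf).

    Higher k3 -> less saturation, i.e. closer to linear in qtf.
    """
    return (k3 + 1) * qtf / (k3 + qtf)

# A query term repeated 1..4 times:
for qtf in range(1, 5):
    terrier_like = k3_factor(qtf, k3=8)    # Terrier-style damping
    old_paper = k3_factor(qtf, k3=1000)    # the "100 to 1000" regime
    print(f"qtf={qtf}  k3=8: {terrier_like:.2f}  k3=1000: {old_paper:.2f}")
```

At k3 = 1000 the factor tracks qtf almost exactly (1.00, 2.00, 2.99, 3.99) - which is why just using the raw query term frequency, as Lucene does, is a sane shortcut. At k3 = 8, repeats get damped (1.00, 1.80, 2.45, 3.00).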
Rare terms have high inverse document frequency (IDF). BM25 scoring treats high-IDF terms as more relevant. Why? We assume if a term occurs rarely in the corpus, it must unambiguously point to what the user wants. It’s specific.

But that’s not always true. Not all text is created equal. Corpuses violate this assumption frequently. Why?

- No need to use a common term - book titles may rarely mention the word “book”, but clearly “book” in a book index has low specificity.
- Language gaps between...
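Here’s a tiny sketch of why that bites, using the Lucene-flavored BM25 IDF formula ln(1 + (N - df + 0.5) / (df + 0.5)). The corpus size and document frequencies below are made-up toy numbers:

```python
import math

def bm25_idf(doc_freq: int, num_docs: int) -> float:
    """Lucene-style BM25 IDF: the rarer the term, the higher the weight."""
    return math.log(1 + (num_docs - doc_freq + 0.5) / (doc_freq + 0.5))

N = 100_000  # toy corpus of book titles

# "book" rarely shows up IN book titles, so it scores like a specific term...
print(f"idf('book')  = {bm25_idf(doc_freq=50, num_docs=N):.2f}")      # ~7.59
# ...even though in a book index it tells you almost nothing. Compare a
# genuinely common title word:
print(f"idf('guide') = {bm25_idf(doc_freq=40_000, num_docs=N):.2f}")  # ~0.92
```

So a query like “python book” against that index would weight “book” as if it were highly specific, even though it barely narrows anything down in that corpus.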