Evaluating search: free class Tuesday


Final free class Tuesday:

Tuesday will be the last Cheat at Search Essentials class before we begin Cheat at Search with Agents next week.

Have you ever had to evaluate a search application? To figure out if it's satisfying users? To distinguish your team's opinions from what users actually click on?

You may hear terms like "NDCG" or "judgment list". WTF are those? If you want to backfill some basics and understand search evaluation, come to the final Cheat at Search Essentials class on Tuesday. I'll define these core concepts and share my own opinions on what you should do.

Feb's paid course:

Please consider signing up for my paid training in Feb. I'm grateful for any support! It lets me continue this newsletter, my blog, and free speaking.

To that end, attending Tuesday's class is your last chance at a discount for Feb's Cheat at Search with Agents. Prices go up after Feb 2nd.

What is Cheat at Search with Agents? I'll be teaching what have become the significant themes of 2026 for using agents + LLMs in search:

  • LLM Query Understanding
  • Agentic search + RAG - an agent in the loop of a search request
  • Agentic relevance tuning - an agent generating code + other artifacts to help tune a relevance algorithm

Best,

-Doug

Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile:

Doug Turnbull

I share search tips, blog articles, and free events I'm hosting about the search+retrieval industry, vector databases, information retrieval and more.

Read more from Doug Turnbull

Reviewing Bayesian BM25 - a new approach to creating calibrated BM25 probabilities for hybrid search. I talk about this vs naive approaches I've used to do similar things. Enjoy! https://softwaredoug.com/blog/2026/03/06/probabilistic-bm25-utopia -Doug

You may know BM25 lets you tune two parameters:

  • k1: how quickly to saturate document term frequency's contribution
  • b: how much to bias towards below-average-length docs

What you may NOT know is that there is another parameter: k3. What does k3 do? It handles repeated query terms. Old papers suggest k3=100 to 1000, which immediately saturates. That's why Lucene ignores k3 and just uses the query term frequency. Some other search engines, like Terrier, set it to 8. So for the query, "Best dog toys for...
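A minimal sketch of the query-side k3 term from the classic Robertson BM25 formulation (function name and the example numbers are mine, purely illustrative):

```python
def k3_weight(qtf: int, k3: float) -> float:
    """Classic BM25 query-side saturation: (k3 + 1) * qtf / (k3 + qtf)."""
    return (k3 + 1) * qtf / (k3 + qtf)

# With a large k3 (the 100-1000 range from old papers), the weight is
# nearly the raw query term frequency -- effectively what Lucene does:
print(k3_weight(1, 1000))  # 1.0
print(k3_weight(3, 1000))  # ~2.99, almost linear in qtf

# With a Terrier-style k3=8, repeated query terms get dampened:
print(k3_weight(3, 8))     # 27/11, ~2.45
```

Note the direction of the dial: as k3 grows the curve flattens toward raw query term frequency, while a small k3 saturates repeated query terms quickly.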

Rare terms have high inverse document frequency (IDF). BM25 scoring treats high-IDF terms as more relevant. Why? We assume that if a term occurs rarely in the corpus, it must unambiguously point to what the user wants. It's specific. But that's not always true. Not all text is created equal. Corpuses violate this assumption frequently. Why?

  • No need to use a common term - book titles may rarely mention the word "book", but clearly "book" in a book index has low specificity.
  • Language gaps between...
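To make the rare-term assumption concrete, here's a sketch using the Lucene-style BM25 IDF formula (the corpus size and document frequencies are made-up numbers for illustration):

```python
import math

def bm25_idf(N: int, df: int) -> float:
    """Lucene-style BM25 IDF: log(1 + (N - df + 0.5) / (df + 0.5))."""
    return math.log(1 + (N - df + 0.5) / (df + 0.5))

N = 10_000  # hypothetical corpus size

# A term appearing in nearly every doc (like "book" in a book index)
# gets a tiny IDF -- the scoring model calls it unimportant:
print(bm25_idf(N, df=9_000))

# A genuinely rare term gets a large IDF and dominates the score,
# whether or not it actually pinpoints what the user wants:
print(bm25_idf(N, df=10))
```

The formula only sees counts, not meaning, which is exactly why corpuses that "violate the assumption" can mislead BM25.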