Evaluating search: free class Tuesday


Final free class Tuesday:

Tuesday will be the last Cheat at Search Essentials class before we begin Cheat at Search with Agents next week.

Have you ever had to evaluate a search application? To figure out if it's satisfying users? To distinguish your team's opinion from what users actually click on?

You may hear terms like "NDCG" or "judgment list". WTF are those? If you want to backfill some basics and understand search evaluation, come to the final Cheat at Search Essentials class on Tuesday. I'll define these core concepts while also giving my own opinions on what you should do.
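For a quick taste of what we'll cover: a judgment list assigns relevance grades to query/document pairs, and NDCG measures how close your actual ranking comes to the ideal ordering those grades imply. A minimal sketch in Python (function and variable names are mine, not from any particular library):

```python
import math

def ndcg_at_k(judgments, ranked_doc_ids, k=10):
    """NDCG@k: actual DCG of the ranking divided by the ideal (best possible) DCG."""
    def dcg(grades):
        # Each grade is discounted by log2 of its (1-indexed) position + 1
        return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

    actual = dcg([judgments.get(doc_id, 0) for doc_id in ranked_doc_ids[:k]])
    ideal = dcg(sorted(judgments.values(), reverse=True)[:k])
    return actual / ideal if ideal > 0 else 0.0

# Judgment list for one query: doc id -> relevance grade (0 = irrelevant, 3 = perfect)
judgments = {"doc_a": 3, "doc_b": 2, "doc_c": 0}

# Our ranking put the grade-2 doc above the grade-3 doc, so NDCG < 1.0
print(round(ndcg_at_k(judgments, ["doc_b", "doc_a", "doc_c"]), 3))  # → 0.913
```

A perfect ordering scores 1.0, so NDCG lets you compare rankings across queries on the same scale.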

Feb's paid course:

Please consider signing up for my paid training in Feb. I'm grateful for any support! It lets me continue this newsletter, my blog, and free speaking.

To that end, Tuesday's class is your last chance at a discount on Feb's Cheat at Search with Agents. Prices will go up after Feb 2nd.

What is Cheat at Search with Agents? I'll be teaching what have become significant themes in 2026 for using agents + LLMs in search:

  • LLM Query Understanding
  • Agentic search + RAG - an agent in the loop of a search request
  • Agentic relevance tuning - an agent generating code + other artifacts to help tune a relevance algorithm

Best,

-Doug

Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile:
