Don't trust a judgment list


In search, judgment lists fall apart. It's frankly humbling and humiliating :)

The work isn’t whatever tech returns the search results. The work is the measurement - the evaluation. Everything else in search becomes incidental.

Better evals have been my humbling lesson from industry. Humbling not because they're exciting or sexy, but because they're hard. And it's the work nobody wants to do. Everyone wants to do the sexy modeling instead.

For this reason, almost everyone overestimates the value of some naive judgment-list method. Almost nobody wants to understand its limitations. The casual judgment list our team put together by hand, the crowdsourced one from experts, the clickstream-based one - none of these would I trust beyond well-scoped problems. It’s easy to cherry-pick when one approach worked but ignore when it failed. It’s harder to admit our systems of evaluation have deep, fundamental flaws no miracle fix can cure.

That all has to change if you train a model on user behaviors. Then this becomes the REAL work of search. It’s not whether you choose LambdaMART or a cross-encoder or a bi-encoder or 512 trained hamsters. The real modeling work is “does this floating-point number accurately describe whether a real user thinks this result is relevant or not?”

That’s really hard, underappreciated, and unsexy work. If that interests you, please come hang out at Cheat at Search Essentials tomorrow to learn about and chat evals with me :) I’ll try to share the theory, the practice, and the practical, dumb, grug-based systems that actually work.

-Doug

PS: 6 days left for Cheat at Search with Agents - http://maven.com/softwaredoug/cheat-at-search

Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile:

Doug Turnbull

I share search tips, blog articles, and free events I'm hosting about the search + retrieval industry, vector databases, information retrieval and more.

Read more from Doug Turnbull

Good vector search means more than embeddings. Embeddings don’t know when a result matches / doesn’t match. Similarity floors don’t work consistently - a cutoff that works for one query might be disastrous for another. Even worse: your embedding usually can’t capture every little bit of meaning from your corpus. You need to efficiently pick the best top N candidates from your vector database. What do you need? Query Understanding - translating the query to domain language (categories, colors,...

Reciprocal Rank Fusion merges one system’s search ranking with another’s (i.e. lexical + embedding search). RRF scores a document with ∑ 1/rank across each underlying system. I’ve found RRF is not enough. Here’s the typical pattern I see on teams: a mature lexical solution exists, and it’s pretty good. The team wants to add untuned, embedding-based retrieval. They deploy a vector DB and RRF the embedding results with the mature system. Disaster ensues! The poor embedding results drag down the lexical...
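The RRF scoring above fits in a few lines. A sketch using the common variant that adds a smoothing constant k (typically 60) to each rank, with hypothetical doc IDs:

```python
from collections import defaultdict

def rrf_merge(rankings, k=60):
    """Merge ranked lists: each doc scores sum of 1/(k + rank) per system."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["doc_a", "doc_b", "doc_c"]     # hypothetical mature lexical ranking
embedding = ["doc_b", "doc_d", "doc_a"]   # hypothetical untuned embedding ranking
merged = rrf_merge([lexical, embedding])  # → ["doc_b", "doc_a", "doc_d", "doc_c"]
```

Note how doc_d (embedding-only, never seen by the trusted lexical system) outranks doc_c from the mature ranking - exactly the dragging-down effect described above.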

Just sharing my post on Bayesian BM25 and other ways of normalizing BM25 scores. Enjoy! https://softwaredoug.com/blog/2026/03/06/probabilistic-bm25-utopia Do you have any thoughts on normalizing BM25 scores? -Doug Events · Consulting · Training (use code search-tips) You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile: