Late interaction is having a moment. The team at LightOn - including superstar developer Antoine Chaffin - has demonstrated how a 150M(!) parameter late interaction model beats much larger models - some up to 8B parameters. David beats Goliath! Better search that costs you less! Tested on what dataset? BrowseComp. BrowseComp asks difficult questions requiring detailed, complex research. Tasks you can imagine agents chugging away at, searching, getting frustrated and lost. Here's an example prompt / answer...
about 1 month ago • 1 min read
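For readers new to the idea: late interaction (as in ColBERT-style models) keeps one embedding per token and scores a document by matching every query token against its best document token, then summing those maxima ("MaxSim"). A minimal sketch with toy 2-d vectors standing in for real model outputs - the embeddings here are made up for illustration:

```python
def maxsim(query_vecs, doc_vecs):
    """Late interaction (MaxSim) scoring: for each query token
    embedding, take its best dot-product match among the document's
    token embeddings, then sum those per-token maxima."""
    score = 0.0
    for q in query_vecs:
        best = max(sum(qi * di for qi, di in zip(q, d)) for d in doc_vecs)
        score += best
    return score


# Toy 2-d token embeddings (real models emit one vector per token)
query = [[1.0, 0.0], [0.0, 1.0]]
doc_a = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]  # covers both query tokens
doc_b = [[0.2, 0.2], [0.3, 0.1]]              # covers neither well

print(maxsim(query, doc_a))  # higher: each query token finds a match
print(maxsim(query, doc_b))
```

Because matching happens per token at query time (rather than squeezing the whole text into one vector), small models can punch well above their weight.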
How do teams choose vector databases / search engines? People rack their brains between Elasticsearch/OpenSearch/Solr/Vespa/Pinecone/Turbopuffer/Weaviate/…? First things first - DO NOT start with a feature matrix. Start with the simple question: What is my team most comfortable with? That's the default. If everyone can go deep in one system, don't overcomplicate the decision. It might be good enough to stop here. NEXT - consider the high-level characteristics of the project. Use these as...
about 1 month ago • 1 min read
Hey all, I've been doing these "daily search tips" since the end of last year. I enjoy putting them together, and have begun archiving them on my site. I'm curious if you find them valuable? If you can, could you please reply and say: Whether you're getting value One thing you like One thing you don't like (Of course I encourage unsubscribes if not useful, see below) Cheers, -Doug Events · Consulting · Training (use code search-tips) You're subscribed to Doug Turnbull's daily search tips...
about 1 month ago • 1 min read
A user searches for red shoes, they click on some products. Now you have a set of relevant products. Great! But what can you do with it? You could literally memorize these amazing results and show them to future users. Maybe that's the right thing to do. But we can take it up a notch. Now imagine those products have attributes. Like color. We observe: eighty percent of red shoes' clicked products have colors ['red', 'maroon', 'pink', 'rose']. Now you've understood something about red shoes...
about 1 month ago • 1 min read
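The aggregation step above can be sketched in a few lines - compute what share of a query's clicks went to each attribute value. The click log here is made up for illustration:

```python
from collections import Counter


def attribute_profile(clicked_products, attribute):
    """Share of a query's clicks going to each value of an attribute,
    e.g. the color distribution of products clicked for 'red shoes'."""
    counts = Counter(p[attribute] for p in clicked_products)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}


# Hypothetical click log for the query "red shoes"
clicks = [
    {"id": 1, "color": "red"}, {"id": 2, "color": "red"},
    {"id": 3, "color": "maroon"}, {"id": 4, "color": "pink"},
    {"id": 5, "color": "blue"},
]

profile = attribute_profile(clicks, "color")
print(profile)  # {'red': 0.4, 'maroon': 0.2, 'pink': 0.2, 'blue': 0.2}
```

A profile like this generalizes beyond the memorized result set: any product whose color falls in the high-probability values becomes a candidate for the query.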
Look at this math and grasp its majesty: P(R) = P(R | BM25) * P(R | Emb) # lexical * embedding OK what's so special about that? That's an AND. A probabilistic way of combining scores so that when BOTH "things happen", the final result becomes true. What Bayesian BM25 does, as explained in my blog article, is calibrate BM25 scores so they become meaningful probabilities. For your labeled dataset: A "meh result" BM25 score → map to P=0.5. A "good result" BM25 score → map to P=1.0. Once...
about 1 month ago • 1 min read
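A minimal sketch of the calibrate-then-AND idea. The linear interpolation between the two labeled anchor points is an assumption for illustration - the blog post may use a different calibration curve - and the anchor scores and embedding probability below are made up:

```python
def calibrate(score, meh_score, good_score):
    """Map a raw BM25 score onto a probability using two labeled
    anchor points: a 'meh' result's score -> P=0.5, a 'good'
    result's score -> P=1.0. Linear interpolation between the
    anchors is an assumption here, not the post's exact method."""
    p = 0.5 + 0.5 * (score - meh_score) / (good_score - meh_score)
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability


# Hypothetical anchor scores learned from a labeled dataset
p_bm25 = calibrate(9.0, meh_score=5.0, good_score=12.0)
p_emb = 0.8  # similarly calibrated embedding probability (assumed)

# Probabilistic AND: both signals must be strong for a high score
p_relevant = p_bm25 * p_emb
```

The multiplication is what makes it an AND: one weak signal (probability near 0) drags the product down no matter how strong the other is.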
Good vector search means more than embeddings. Embeddings don't know when a result matches / doesn't match. Similarity floors don't work consistently - a cutoff that works for one query might be disastrous for another. Even worse: your embedding usually can't capture every little bit of meaning from your corpus. You need to efficiently pick the best top N candidates from your vector database. What do you need? Query Understanding - translating the query to domain language (categories, colors,...
about 1 month ago • 1 min read
Reciprocal Rank Fusion merges one system's search ranking with another's (i.e. lexical + embedding search). RRF scores a document with Σ 1/rank over each underlying system. I've found RRF is not enough. Here's the typical pattern I see on teams: A mature lexical solution exists. It's pretty good. The team wants to add untuned, embedding-based retrieval. They deploy a vector DB, and RRF the embedding results with the mature system. Disaster ensues! The poor embedding results drag down the lexical...
about 1 month ago • 1 min read
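For concreteness, a minimal RRF sketch. The full formula is usually 1/(k + rank) with a smoothing constant k (k=60 in the original RRF paper); the document lists are made up:

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: score each doc by the sum of
    1/(k + rank) over every system's ranking that returned it.
    k=60 is the constant from the original RRF paper."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


lexical = ["a", "b", "c"]     # mature, tuned system
embedding = ["c", "a", "d"]   # untuned vector retrieval
print(rrf([lexical, embedding]))  # ['a', 'c', 'b', 'd']
```

Note that RRF only sees ranks, not scores or quality - which is exactly why a weak, untuned system gets an equal vote and can drag down a strong one.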
Just sharing my post on Bayesian BM25 and other ways of normalizing BM25 scores. Enjoy! https://softwaredoug.com/blog/2026/03/06/probabilistic-bm25-utopia Do you have any thoughts on normalizing BM25 scores? -Doug
about 2 months ago • 1 min read
It's convenient to have a lexical score normalized from 0-1. Sadly BM25 scores tend to be all over the place (0.5? 5.1? 12.51?). Fine for ranking. Annoying for other goals. That's why I wrote a post about one way to compute probabilities from BM25. In that post, I allude to one hack that forces BM25 to 0-1. Let's walk through it. A query term's BM25 score is IDF * TF. Lucene's TF is already normalized. Lucene drops the (k1 + 1) in the numerator of BM25, giving you: Now we've got a TF term...
about 2 months ago • 1 min read
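A sketch of the TF term in question, assuming the standard BM25 saturation formula with the (k1 + 1) numerator factor dropped: tf / (tf + k1 * (1 - b + b * doc_len / avg_doc_len)). That ratio is bounded in [0, 1) - it saturates toward 1.0 as term frequency grows but never reaches it. The parameter defaults below are the common BM25 defaults, used here for illustration:

```python
def lucene_tf(tf, k1=1.2, b=0.75, doc_len=100, avg_doc_len=100):
    """BM25's TF term with the (k1 + 1) numerator dropped:
    tf / (tf + k1 * (1 - b + b * doc_len / avg_doc_len)).
    As tf grows this saturates toward 1.0 but never reaches it,
    giving a naturally 0-1 bounded term-frequency component."""
    length_norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    return tf / (tf + length_norm)


for tf in (1, 5, 50):
    print(tf, round(lucene_tf(tf), 3))
```

With a bounded TF term, only IDF keeps the per-term score unbounded - which is the hook the "force BM25 to 0-1" hack builds on.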