Why do single vector representations fail? (daily search tip)


This week we’ll talk a bit about late interaction. But to get there, we need to think about why single vector representations fail.

Let’s think about restaurants.

Here’s an article reviewing local restaurants. I have three Italian restaurants and two Chinese ones.

What’s the average of these? Russian or something!? Maybe Middle Eastern food?

If my document lists these restaurants, then that’s exactly what I’ll get in a single vector encoding. A confusing muddle somewhere in the middle of every cuisine.

The user comes along looking for “best Italian restaurant in my town” and doesn’t get my document listing 3 Italian and 2 Chinese restaurants. The cosine similarity between the query, “italian restaurant,” and this document, lost somewhere in the Middle East, has dropped too low.
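Here’s a toy sketch of that washout, using random unit vectors as stand-ins for real embeddings (the vectors, noise scale, and dimensions are all made up for illustration). Averaging the five restaurant vectors into one document vector drags the “Italian” query score down, while scoring the query against each part and taking the max (the core move in late interaction) keeps it high:

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Stand-ins for cuisine concepts: random unit vectors (nearly orthogonal
# in 64 dimensions, much like unrelated concepts in a real embedding space).
italian = rng.normal(size=64); italian /= np.linalg.norm(italian)
chinese = rng.normal(size=64); chinese /= np.linalg.norm(chinese)

# Per-restaurant vectors: 3 Italian, 2 Chinese, each with a little noise.
restaurants = [v + 0.02 * rng.normal(size=64)
               for v in [italian] * 3 + [chinese] * 2]

# Single-vector encoding of the document: the mean of its parts.
doc = np.mean(restaurants, axis=0)

query = italian  # the user's "best italian restaurant" query

whole = cos(query, doc)                          # washed-out average
best_part = max(cos(query, r) for r in restaurants)  # MaxSim-style score

print(f"query vs averaged doc: {whole:.2f}")
print(f"query vs best part:    {best_part:.2f}")
```

The averaged document lands between Italian and Chinese, so the whole-document score is noticeably lower than the best per-part score. That per-part max is essentially what late interaction computes, just at the token level.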

As documents grow in complexity, the problem only worsens.

This sort of failure mode happens all the time with embeddings, where, for whatever reason, the whole washes out the parts.

In any information-heavy search, the tension between retrieving the whole document and narrowing in on its particular, sometimes diverse, facts grows stronger. And that’s why this week we’ll learn one approach: late interaction!

-Doug

PS: today at 12:30PM ET is the deadline to sign up for Cheat at Search with Agents: http://maven.com/softwaredoug/cheat-at-search

Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile:

Doug Turnbull

I share search tips, blog articles, and free events I'm hosting about the search+retrieval industry, vector databases, information retrieval and more.
