With embedding similarity you train with an anchor, a positive, and a negative. You want to move the positive's embedding closer to the anchor's, while pushing the negative's farther away. Enter good ole word2vec.
From just the context "mary had a little lamb", we might have:

ANCHOR  POSITIVE  NEGATIVE
mary    little    toenail
mary    lamb      banana
Over many passages, you might imagine each of these becoming more similar to mary.
Importantly, these embeddings only know the words shared context: they appeared within a few words of each other. They do not act as language models.
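The anchor/positive/negative idea above can be sketched with a toy update rule. This is a caricature, not gensim's actual word2vec training loop: hypothetical 8-dim embeddings, nudged directly instead of via gradients on a loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table (hypothetical vocabulary, 8-dim vectors).
vocab = ["mary", "little", "lamb", "toenail", "banana"]
emb = {w: rng.normal(size=8) for w in vocab}

def triplet_step(anchor, positive, negative, lr=0.1):
    """Pull the positive's embedding toward the anchor's,
    push the negative's away. A sketch of what contrastive /
    word2vec-style training accomplishes over many passages."""
    a = emb[anchor]
    emb[positive] += lr * (a - emb[positive])   # move closer
    emb[negative] -= lr * (a - emb[negative])   # move farther

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

before = cos(emb["mary"], emb["lamb"])
for _ in range(20):  # "many passages" sharing context
    triplet_step("mary", "little", "toenail")
    triplet_step("mary", "lamb", "banana")
after = cos(emb["mary"], emb["lamb"])
# after > before: lamb has drifted toward mary
```

After a handful of steps, lamb and little score high cosine similarity with mary, while toenail and banana do not; nothing here models which word comes *next*.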
Is the article's topic Disney? A language model knows the next token after mary is more likely to be poppins. But word2vec just as easily picks nursery rhyme, church, and other "mary" themes.

-Doug

PS - 7 days left to sign up for Cheat at Search with Agents!

Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile:
I share search tips, blog articles, and free events I'm hosting about the search+retrieval industry, vector databases, information retrieval, and more.
Good vector search means more than embeddings. Embeddings don't know when a result matches or doesn't. Similarity floors don't work consistently: a cutoff that works for one query might be disastrous for another. Even worse, your embedding usually can't capture every little bit of meaning from your corpus. You need to efficiently pick the best top N candidates from your vector database.

What do you need? Query Understanding - translating the query to domain language (categories, colors,...
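The top-N-vs-floor distinction can be sketched with synthetic vectors (all names and sizes here are made up for illustration): a fixed similarity cutoff admits a different, unpredictable number of results per query, while top-N always returns exactly N.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical corpus: 1000 unit-normalized 32-dim doc vectors.
docs = rng.normal(size=(1000, 32))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

def top_n(query, n=10):
    """Pick the best N candidates by cosine score, rather than
    trusting a fixed similarity floor whose behavior shifts
    from query to query."""
    q = query / np.linalg.norm(query)
    scores = docs @ q
    idx = np.argsort(-scores)[:n]
    return idx, scores[idx]

q1, q2 = rng.normal(size=32), rng.normal(size=32)
_, s1 = top_n(q1)
_, s2 = top_n(q2)
# A floor tuned to q1's score distribution may admit far more
# (or fewer) results for q2; top-N returns exactly 10 either way.
```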
Reciprocal Rank Fusion merges one system's search ranking with another's (ie lexical + embedding search). RRF scores a document with ∑ 1/(k + rank) over each underlying system (k is a smoothing constant, commonly 60). I've found RRF is not enough. Here's the typical pattern I see on teams:

1. A mature lexical solution exists. It's pretty good.
2. The team wants to add untuned, embedding based retrieval.
3. They deploy a vector DB, and RRF embedding results with the mature system.
4. Disaster ensues! The poor embedding results drag down the lexical...
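The RRF formula itself is a few lines. A minimal sketch (document IDs and rankings here are invented for illustration):

```python
from collections import defaultdict

def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: each document's fused score is
    the sum of 1 / (k + rank) over every system that returned
    it. Ranks are 1-based; k=60 is the usual default."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["d1", "d2", "d3"]    # mature system's ranking
embedding = ["d3", "d1", "d4"]  # untuned embedding ranking
fused = rrf([lexical, embedding])
# → ["d1", "d3", "d2", "d4"]
```

Note what RRF can't see: it only knows ranks, not how good each system's results actually are, so a weak embedding ranking gets equal say.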
Just sharing my post on Bayesian BM25 and other ways of normalizing BM25 scores. Enjoy! https://softwaredoug.com/blog/2026/03/06/probabilistic-bm25-utopia Do you have any thoughts on normalizing BM25 scores? -Doug
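For context, the simplest baseline people reach for is per-query min-max normalization. This is not the Bayesian approach from the post, just a crude point of comparison:

```python
def minmax_normalize(scores):
    """Squash one query's BM25 scores into [0, 1] via min-max.
    Crude: the mapping depends entirely on this query's result
    list, so scores aren't comparable across queries."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

normed = minmax_normalize([12.3, 7.1, 2.0])
# top result -> 1.0, bottom -> 0.0, middle somewhere between
```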