Embedding triplet training - know word2vec (daily search tip)


With embedding similarity training you use an anchor, a positive, and a negative. You want to move the positive's embedding closer to the anchor's, while pushing the negative's farther apart.
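One common way to express that objective is a triplet margin loss. Here's a minimal NumPy sketch (the function name and margin value are illustrative, not from any particular library):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss: zero once the positive is at least `margin`
    closer to the anchor than the negative is."""
    pos_dist = np.linalg.norm(anchor - positive)
    neg_dist = np.linalg.norm(anchor - negative)
    return max(0.0, pos_dist - neg_dist + margin)

anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # nearby -> small positive distance
negative = np.array([-1.0, 0.0])  # already far away -> loss hits zero
print(triplet_loss(anchor, positive, negative))
```

When the loss is nonzero, gradient descent nudges the positive toward the anchor and the negative away from it.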

Enter good ole word2vec

  • Every word in the vocabulary starts with its own random embedding
  • When a word co-occurs with another word, it's a positive (training moves them together)
  • A random word, sampled out of context, is a negative (training pushes them apart)

From just the context, “mary had a little lamb”, we might have:

ANCHOR  POSITIVE  NEGATIVE
mary    little    toenail
mary    lamb      banana
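Generating triplets like these can be sketched in a few lines of skip-gram-style Python. This is an illustrative toy, not word2vec's actual implementation; the function name, window size, and the toy vocabulary of negatives are all assumptions:

```python
import random

def make_triplets(tokens, neg_vocab, window=2, seed=42):
    """Pair each token with its window neighbors (positives)
    and a randomly sampled out-of-context word (negative)."""
    rng = random.Random(seed)
    triplets = []
    for i, anchor in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j == i:
                continue
            positive = tokens[j]           # co-occurs within the window
            negative = rng.choice(neg_vocab)  # sampled out of context
            triplets.append((anchor, positive, negative))
    return triplets

sentence = "mary had a little lamb".split()
for t in make_triplets(sentence, ["toenail", "banana", "church"])[:3]:
    print(t)
```

Real word2vec training samples negatives in proportion to (smoothed) word frequency rather than uniformly, but the shape of the data is the same: anchor, in-window positive, out-of-context negative.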

Over many passages, you might imagine each of these pairs drawing closer together:

  • mary + lamb
  • mary + church
  • bloody + mary
  • mary + poppins

Importantly, these embeddings only know that words shared context: they appeared within a few words of each other. They do not act as language models.

  • Language models use the entire document as context; here, context is binary (a word either co-occurs within a few tokens, or it doesn't count)
  • Language models use a transformer architecture that weighs long-range relationships between this token and other, distant tokens

Is the article's topic Disney? A language model knows the next token after mary is more likely to be poppins. But word2vec will just as happily choose nursery rhymes, church, and other "mary" themes.

-Doug

PS - 7 days left to sign up for Cheat at Search with Agents!

Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile:
