word2vec isn’t just for words (daily search tip)


In my previous tip I introduced word2vec. I discussed it in terms of language: the word mary shares context with the word lamb, so their embeddings move closer together.

Why constrain ourselves to language?

We could pretend that “Doug likes Star Wars” is the same kind of co-occurrence. We can make a table of users, a movie they like, and a movie they don’t:

Anchor  Positive movie        Negative movie
doug    star wars             king kong
doug    star trek             cinderella
tom     star wars             citizen kane
tom     battlestar galactica  the aviator

Think about what we have:

  • Doug and Tom’s embeddings grow closer through star wars: word2vec-style training shrinks the distance Doug ←→ Star Wars and Tom ←→ Star Wars, making Doug a more similar user to Tom.
  • In the same way, battlestar galactica moves closer to star trek through Doug and Tom.

And now we have a movie recommender system, built on the same technology behind word2vec.
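To make that concrete, here’s a minimal sketch (my own code, not a specific library) of word2vec-style training with negative sampling over the (anchor, positive, negative) triples from the table above. The names `embed`, `sigmoid`, and `cosine` are illustrative, and the update rule is the standard skip-gram-with-negative-sampling gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared embedding space for users AND movies.
vocab = ["doug", "tom", "star wars", "star trek",
         "battlestar galactica", "king kong", "cinderella",
         "citizen kane", "the aviator"]
idx = {name: i for i, name in enumerate(vocab)}

dim = 8
embed = rng.normal(scale=0.1, size=(len(vocab), dim))

# The table above as (anchor, positive, negative) triples.
triples = [("doug", "star wars", "king kong"),
           ("doug", "star trek", "cinderella"),
           ("tom", "star wars", "citizen kane"),
           ("tom", "battlestar galactica", "the aviator")]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for epoch in range(200):
    for anchor, pos, neg in triples:
        a, p, n = idx[anchor], idx[pos], idx[neg]
        # Loss: -log sigmoid(a.p) - log sigmoid(-a.n)
        g_pos = 1.0 - sigmoid(embed[a] @ embed[p])  # pull toward positive
        g_neg = sigmoid(embed[a] @ embed[n])        # push away from negative
        grad_a = g_pos * embed[p] - g_neg * embed[n]
        embed[p] += lr * g_pos * embed[a]
        embed[n] -= lr * g_neg * embed[a]
        embed[a] += lr * grad_a

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Doug and Tom were both pulled toward star wars, so they end up
# closer to each other than to the movies they rejected.
print(cosine(embed[idx["doug"]], embed[idx["tom"]]))
```

Because users and movies live in one space, the same `cosine` lookup answers both “which users are like Doug?” and “which movies should Doug see next?”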

We could use this for quite a lot of domains:

  • Queries and documents
  • Images and captions

And so on!

-Doug

PS - 5 days left to sign up for Cheat at Search with Agents!

Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile:
