Spreadsheet implementation of word2vec (daily search tip)


Here’s a fun spreadsheet that implements word2vec. Use it as a jumping-off point.

It has:

  • A single small vocabulary of 9 words
  • A single training example: “mary had a little lamb”
  • We move positive vectors closer together (mary is IN the context window of had) and negative vectors farther apart (toenail is NOT IN the context window of mary); a sketch of how these pairs are generated follows
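
Here’s a rough Python sketch (not the spreadsheet itself) of how those positive and negative pairs come about. The window size of 2 and the exact 9-word vocabulary are assumptions pieced together from the examples in this post:

```python
# Minimal sketch of positive / negative pair generation.
# The vocabulary and window size below are illustrative assumptions.
import random

vocab = ["mary", "had", "a", "little", "lamb",
         "poppins", "jane", "monkeys", "toenail"]
sentence = ["mary", "had", "a", "little", "lamb"]
window = 2  # assumed context window size

# Positive pairs: (center word, word inside its context window)
positive_pairs = []
for i, center in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            positive_pairs.append((center, sentence[j]))

# Negative pairs: (center word, random word NOT in its context window)
def sample_negative(center, context_words):
    candidates = [w for w in vocab if w != center and w not in context_words]
    return (center, random.choice(candidates))

print(positive_pairs[:3])                     # ('mary', 'had'), ('mary', 'a'), ('had', 'mary')
print(sample_negative("mary", {"had", "a"}))  # e.g. ('mary', 'toenail')
```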

In word2vec we maintain two embeddings per vocabulary entry: an input vector and an output vector.

They mean subtly different things:

  • Let’s say two input vectors are similar, like mary and jane. This says: when mary and jane act as center words, they co-occur with similar in-context words. Maybe poppins?
  • Let’s say two output vectors are similar: little and lamb. This says: they often appear near the same center words, e.g. mary

In the end, they’ll be highly correlated. But most people take the input vectors after training.
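
A minimal sketch of those two tables, assuming a tiny embedding dimension of 4 and random starting values (the spreadsheet’s actual numbers will differ):

```python
# Two embedding tables per vocabulary entry: input (center) and output (context) vectors.
# Dimension and random values are illustrative assumptions.
import numpy as np

vocab = ["mary", "had", "a", "little", "lamb",
         "poppins", "jane", "monkeys", "toenail"]
idx = {w: i for i, w in enumerate(vocab)}
dim = 4

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # input (center) vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # output (context) vectors

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Similar *input* vectors: mary and jane co-occur with similar in-context words
print(cosine(W_in[idx["mary"]], W_in[idx["jane"]]))
# Similar *output* vectors: little and lamb appear near the same center words
print(cosine(W_out[idx["little"]], W_out[idx["lamb"]]))
# After training, most people keep W_in as "the" word vectors
```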

The spreadsheet models two word2vec variants:

  • Softmax: predict the probability of an adjacent word directly, given a center word. E.g. poppins near mary should get a reasonably high probability compared to monkeys near mary. Then backprop into the embeddings so the model predicts these probabilities more accurately. Sadly, the softmax normalizes over the entire vocabulary, so with real vocabularies of 100k-1m+ words this becomes infeasible during training
  • Skip-gram with negative sampling: learn from samples instead. Nudge the dot product of the center word and one positive in-context word higher (i.e. pull poppins closer to mary), but push the center word away from sampled out-of-context words, like mary from toenail. This method scales much better and is the more common choice; a sketch of both objectives follows
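
Here’s a sketch of both objectives in Python. The specific words, the learning rate, and the single negative sample are illustrative assumptions; real implementations sample several negatives per positive pair:

```python
# Sketch of the two training objectives on the toy vocabulary.
# Word choices, learning rate, and one-negative-per-positive are assumptions.
import numpy as np

vocab = ["mary", "had", "a", "little", "lamb",
         "poppins", "jane", "monkeys", "toenail"]
idx = {w: i for i, w in enumerate(vocab)}
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(len(vocab), 4))    # input (center) vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), 4))   # output (context) vectors

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
c, pos, neg = idx["mary"], idx["poppins"], idx["toenail"]

# --- Variant 1: full softmax over the whole vocabulary ---
scores = W_out @ W_in[c]            # one dot product per vocabulary word
probs = softmax(scores)             # p(word | mary) for every word
loss_softmax = -np.log(probs[pos])  # cross-entropy for the observed context word
# Backprop touches every row of W_out, which is why this blows up at 100k-1m+ words

# --- Variant 2: skip-gram with negative sampling ---
pos_score = sigmoid(W_in[c] @ W_out[pos])   # want this near 1 (mary, poppins)
neg_score = sigmoid(W_in[c] @ W_out[neg])   # want this near 0 (mary, toenail)
loss_neg = -np.log(pos_score) - np.log(1.0 - neg_score)

# One gradient step: pull mary toward poppins, push mary away from toenail
grad_center = (pos_score - 1.0) * W_out[pos] + neg_score * W_out[neg]
W_out[pos] -= lr * (pos_score - 1.0) * W_in[c]
W_out[neg] -= lr * neg_score * W_in[c]
W_in[c]   -= lr * grad_center
```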

Further reading

-Doug

PS - 3 days left to sign up for Cheat at Search with Agents!

Events · Consulting · Training (use code search-tips)

