It's convenient to have a lexical score normalized to 0-1. Sadly, BM25 scores tend to be all over the place (0.5? 5.1? 12.51?). Fine for ranking. Annoying for other goals. That's why I wrote a post about one way to compute probabilities from BM25. In that post, I allude to one hack that forces BM25 into 0-1. Let's walk through it.

A query term's BM25 score is IDF * TF. Lucene's TF is already normalized: Lucene drops the (k1 + 1) in the numerator of BM25, giving you:

    TF = freq / (freq + k1 * (1 - b + b * dl / avgdl))

The denominator is always at least the numerator, so we've got a TF term bounded from 0-1. Nice.

Now we've got to tackle IDF. Here's the standard (Lucene-flavored) IDF, where N is the number of docs in the corpus and n is the doc frequency of this term:

    IDF = log(1 + (N - n + 0.5) / (n + 0.5))

Turns out the max value of this function is log(N). So! Just slap a log(N) denominator under that sucker:

    IDF_norm = IDF / log(N)

Now that too has become 0-1. BM25 now ranges 0-1, while preserving in-query ranking. It's 100% a hack. What's nice or problematic about this: ...
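Putting the two pieces together, here's a minimal sketch of the per-term hack in Python. The function name and parameters are my own illustration, assuming Lucene-style TF and IDF as described above:

```python
import math

def normalized_bm25_term(tf, doc_len, avg_doc_len, doc_freq, num_docs,
                         k1=1.2, b=0.75):
    """Score one query term on a 0-1 scale: bounded TF times IDF / log(N)."""
    # Lucene-style TF (no k1 + 1 in the numerator) -- bounded from 0-1
    length_norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    tf_part = tf / (tf + length_norm)
    # Lucene-flavored BM25 IDF, divided by its max value log(N)
    idf = math.log(1 + (num_docs - doc_freq + 0.5) / (doc_freq + 0.5))
    idf_part = idf / math.log(num_docs)
    return tf_part * idf_part
```

Summing these per-term scores across query terms still needs a division by the number of terms if you want the full query score bounded by 1.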
Use with care. -Doug

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more.
I share search tips, blog articles, and free events I'm hosting about the search + retrieval industry, vector databases, information retrieval and more.
Talks this week + other events. Hope you can make it and help keep this community awesome 😎 First - Leonie Monigatti will share how Context Engineering IS Agentic Search. Tuesday, 10:30AM ET What everyone is missing: when people talk about "context engineering", they really ought to be improving search. Leonie will cast aside myths about context engineering to show how agents build their own context via retrieval. And THAT, not prompting magic, decides whether your AI app is successful....
Late interaction is having a moment. The team at LightOn - including superstar developer Antoine Chaffin - has demonstrated how a 150M(!) late interaction model beats much larger models - some up to 8B parameters. David beats Goliath! Better search that costs you less! Tested on what dataset? BrowseComp. BrowseComp asks difficult questions requiring detailed, complex research. Tasks where you can imagine agents chugging away, searching, getting frustrated and lost. Here's an example prompt / answer...
How do teams choose vector databases / search engines? People rack their brains between Elasticsearch/OpenSearch/Solr/Vespa/Pinecone/Turbopuffer/Weaviate/…? First things first - DO NOT start with a feature matrix. Start with the simple question: What is my team most comfortable with? That's the default. If everyone can go deep in one system, don't overcomplicate the decision. It might be good enough to stop here. NEXT - consider the high-level characteristics of the project. Use these as...