Know where search management hits limits (daily search tip)


I've mentioned my experience with Shopify merchants who controlled their own search quality: they manually outperformed our best algorithms.

The average Shopify store is a special case:

  • A handful of popular queries drive outsized sales
  • Catalogs tend to be smaller
  • Sellers create new products in niche domains

Contrast this with a general Amazon-like megastore:

  • Billions of unique queries
  • A long tail of rare queries
  • A huge catalog
  • A catalog that constantly changes
  • Generic products

Is search management useful here?

Yes, but focused on:

  • Edge cases - the places where general search models fail
  • Head queries - very popular queries that absolutely must return the right answers
  • Attribute mapping - map queries not to specific products, but to attributes of the ideal product (a category, an image embedding, etc.) so the rules stay general
  • Vigilant measurement - the higher user traffic makes it more feasible to measure the impact of each rule
  • Aggressive retirement - force teams to review every manual search intervention on a regular cadence
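A minimal sketch of what a rule like this might look like in code. Everything here is hypothetical (the `QueryRule` class, field names, and the review sweep are my own illustration, not a real API): the rule maps a head query to attributes of the ideal product rather than to hard-coded product IDs, and carries a review date so it can be retired aggressively.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class QueryRule:
    """Hypothetical search-management rule: maps a query to product
    attributes (category, embedding target), not pinned product IDs."""
    query: str
    boost_category: Optional[str] = None          # e.g. "running-shoes"
    target_embedding: Optional[list] = None       # embedding to rank toward
    review_by: date = date.max                    # forces eventual review

    def is_due_for_review(self, today: date) -> bool:
        return today >= self.review_by

rules = [
    QueryRule("trail runners", boost_category="running-shoes",
              review_by=date(2024, 6, 1)),
    QueryRule("rain jacket", boost_category="outerwear",
              review_by=date(2024, 12, 1)),
]

# Periodic sweep: flag every rule past its review date so the team
# must re-justify or retire it.
stale = [r.query for r in rules if r.is_due_for_review(date(2024, 7, 1))]
print(stale)  # rules due for review
```

Because the rule targets a category and an embedding rather than specific SKUs, it survives catalog churn; the `review_by` field is what makes the "retire aggressively" policy enforceable rather than aspirational.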

In this way, you patch the important problems, then watch them like a hawk.

-Doug


