Can agents replace your search stack?


Give an agent a set of search tools, and it can find relevant products and improve result ranking. So should we throw away our traditional search stack and just let an agent drive some retrievers? Will the future of search APIs be just an agent, with no query understanding or reranking?

Here's the rub - finding things with an agent's help differs from helping agents find information. In one case, the agent helps us. In the other, we must help the agent find what it doesn't know. That second case can't work without traditional retrieval work.
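To make "an agent driving some retrievers" concrete, here's a toy sketch. Everything in it is hypothetical - the tiny corpus, the term-overlap scorer standing in for a real retriever, and the hard-coded reformulations standing in for an LLM's query rewrites - but it shows the shape of the loop: the agent reformulates, calls the search tool, and collects results. Notice the agent is only as good as the retriever underneath it.

```python
# Toy sketch: an "agent" driving a plain keyword retriever.
# The corpus, scoring, and reformulations are all stand-ins,
# not a real agent framework or search API.

CORPUS = {
    "p1": "red running shoes lightweight",
    "p2": "blue hiking boots waterproof",
    "p3": "trail running shoes waterproof",
}

def keyword_search(query: str, top_k: int = 2):
    """Stand-in retriever: score each doc by count of overlapping query terms."""
    terms = set(query.lower().split())
    scored = [
        (sum(t in doc.split() for t in terms), doc_id)
        for doc_id, doc in CORPUS.items()
    ]
    scored.sort(reverse=True)
    return [doc_id for score, doc_id in scored[:top_k] if score > 0]

def agent_search(user_need: str):
    """The 'agent' here just tries reformulations and pools what the tool returns.
    A real agent would generate reformulations with an LLM and judge relevance."""
    reformulations = [user_need, user_need + " shoes", user_need + " waterproof"]
    seen = []
    for q in reformulations:
        for doc_id in keyword_search(q):
            if doc_id not in seen:
                seen.append(doc_id)
    return seen

print(agent_search("trail running"))
```

The point of the sketch: the loop adds recall by rewriting queries, but ranking quality still comes from the retriever - if `keyword_search` is bad, no amount of agent looping fixes it.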

More in my latest blog - https://softwaredoug.com/blog/2026/04/28/search-apis-replaced-by-agents

Best,

-Doug

PS: 3 weeks away from my Cheat at Search w/ Agents class - use code search-tips for a discount.

Slack Community · Events · Consulting · Training (use code search-tips)

You're subscribed to Doug Turnbull's daily search tips where I share tips, blog articles, events, and more. You can always manage your profile:

Doug Turnbull

I share search tips, blog articles, and free events I'm hosting about the search + retrieval industry, vector databases, information retrieval, and more.

Read more from Doug Turnbull

Search community happenings for this week! 2026 is the year of agentic search w/ Jo Kristian Bergum, Thursday April 30th - https://maven.com/p/a4f265/2026-will-be-the-year-of-agentic-search What's happening in Information Retrieval in 2026? Agentic search! This is THE topic everyone is focused on. Agents searching for us. Agents performing deep research. Agentic models like SID-1 focused on fast search (replacing your search API?). And so on. I'll be hosting a conversation with Jo...

Talks this week + other events. Hope you can make it and help keep this community awesome 😎 First - Leonie Monigatti will share how Context Engineering IS Agentic Search. Tuesday, 10:30AM ET What everyone is missing: when people talk about "context engineering", they really ought to be improving search. Leonie will cast aside myths about context engineering to show how agents build their own context via retrieval. And THAT, not prompting magic, decides whether your AI app is successful....

Late interaction is having a moment. The team at LightOn - including superstar developer Antoine Chaffin - has demonstrated how a 150M(!) late interaction model beats much larger models - some up to 8B parameters. David beats Goliath! Better search that costs you less! Tested on what dataset? BrowseComp. BrowseComp asks difficult questions requiring detailed, complex research. Tasks you can imagine agents chugging away at, searching, getting frustrated and lost. Here’s an example prompt / answer...