Estimating a search feature

The story where 'add a search bar' turns out to be 'pick a search engine, build a relevance model, and own the index forever'.

Search has two surface areas the ticket never mentions. The first is relevance: what makes one result better than another, and who decides? The second is operational: where does the index live, how does it stay in sync with the source of truth, and what happens when it falls behind? "Add search" implies a UI control. The work is the ranking and the index.
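The relevance half can be made concrete with a toy scorer. This is a hypothetical sketch, not any engine's model: even a two-line term-frequency score forces the "what makes a result better" decision, and shows how it goes wrong.

```python
# Toy ranker over an in-memory corpus. Every constant and choice here
# (lowercasing, the normalization) is a relevance decision someone owns.

def score(query: str, doc: str) -> float:
    """Plain term-frequency, normalized by document length."""
    terms = query.lower().split()
    words = doc.lower().split()
    if not words:
        return 0.0
    return sum(words.count(t) for t in terms) / len(words)

docs = [
    "Search bar styling guide",
    "How to configure the search index",
    "Search search search",  # keyword stuffing wins under plain TF
]

ranked = sorted(docs, key=lambda d: score("search", d), reverse=True)
# The stuffed document ranks first -- which is exactly the "right answer
# is on page two" complaint waiting to happen.
```

The point is not that TF is the right model; it is that any model encodes a judgment, and someone has to be able to defend it when a user disagrees with the ordering.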

A team that's shipped search before knows the answer is rarely the same twice. Postgres full-text is fine until someone wants typo tolerance. Elasticsearch is fast until you're paying for it on weekends. Algolia is easy until you need a custom ranking. The estimate depends on which trade-off the team is making, and that decision usually hasn't been made when the ticket lands in refinement.

What gets said in the room

Backend: "Postgres can do this with tsvector."

PM: "Should typos still match? Plurals?"

Frontend: "Are we doing autocomplete, or just submit-then-results?"

SRE: "How are we keeping the index in sync? Triggers? Job? Stream?"

Lead: "Who owns relevance when someone complains the right answer is on page two?"
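The PM's typo question is cheap to demo before anyone votes. A minimal sketch using only the standard library — `difflib` here is a stand-in for the real fuzzy matching (trigram similarity, Levenshtein distance) a search service would provide, and the term list is made up:

```python
# Exact token matching vs. fuzzy matching on a toy index.
# difflib.get_close_matches is a stdlib stand-in for what "typos
# should still match" actually asks the engine to do.
import difflib

index_terms = ["search", "searches", "indexing", "relevance"]

def exact(q: str) -> list[str]:
    """What naive token equality gives you."""
    return [t for t in index_terms if t == q]

def fuzzy(q: str, cutoff: float = 0.8) -> list[str]:
    """Close-match lookup; the cutoff is itself a relevance decision."""
    return difflib.get_close_matches(q, index_terms, n=3, cutoff=cutoff)

exact("serach")   # [] -- the typo finds nothing
fuzzy("serach")   # ['search'] -- the typo still finds the term
```

Notice the cutoff parameter: raise it and typos stop matching, lower it and unrelated terms leak in. That dial is the estimate hiding inside the question.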

Questions worth asking before voting

  • What corpus — how many documents, how often do they change?
  • Postgres FTS, dedicated search service, or hosted (Algolia, Typesense)?
  • Typo tolerance, stemming, synonyms, plurals — which are in scope?
  • Faceting and filters, or just a single ranked list?
  • How does the index stay in sync — and what's the staleness budget?
  • Who owns relevance long-term, and how do they tune it?
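The sync question in the list above has a shape worth seeing. Here is a toy sketch of one strategy — a polling job with an `updated_at` cursor. All the names (`Record`, `sync_once`) are hypothetical, and a real system might use triggers, an outbox table, or a change stream instead; the point is that the gap between runs is the staleness budget.

```python
# Poll-based index sync: copy rows changed since the last cursor.
# Illustrative only -- equal timestamps, deletes, and clock skew are
# exactly the edge cases the estimate has to account for.
from dataclasses import dataclass

@dataclass
class Record:
    id: int
    body: str
    updated_at: float  # source-of-truth modification time

def sync_once(source: list[Record], index: dict[int, str], cursor: float) -> float:
    """Upsert every record changed since `cursor`; return the new cursor."""
    high_water = cursor
    for rec in source:
        if rec.updated_at > cursor:
            index[rec.id] = rec.body   # upsert into the search index
            high_water = max(high_water, rec.updated_at)
    return high_water

source = [Record(1, "old doc", 10.0), Record(2, "new doc", 20.0)]
index: dict[int, str] = {}
cursor = sync_once(source, index, cursor=0.0)  # first run picks up both rows
cursor = sync_once(source, index, cursor)      # nothing changed: a no-op
```

Note what the sketch omits: deletes never reach the index, and a row updated at exactly the cursor time is silently skipped. Those omissions are where "keep it in sync" turns into real work.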

See payment integrations for the same shape — a small public surface hiding a long operational tail. Or open a session when the relevance question has an owner.