Site Search in an AI-First Web
A question has been doing the rounds recently: do we actually need site search?
It’s a fair question and one I have been giving a lot of thought to. With AI-powered summaries increasingly answering queries before users even reach a website, and with navigation that, when it works, can get people where they need to go, it’s reasonable to ask whether a search box is still earning its place.
I want to make the case that we do still need it, that there is more we could be doing to hear what it is telling us, and that in a changing web landscape the stakes of getting this right are higher than they might appear. This is my take on that question.
The visitors who remain are asking harder questions
AI-powered search, in the shape of Google’s AI Overviews, Bing’s Copilot, and a growing ecosystem of assistants, is increasingly handling the easy, surface-level queries. What are the entry requirements? Where is the main library? When does the term start? Users may well be getting their answers without ever clicking through to a website.
The people who do arrive are doing something more complex. They’re navigating nuance, completing a task, or looking for something specific that a summary couldn’t resolve. They are, almost by definition, higher-intent users, and they are perhaps more likely to reach for the site search to find what they need.
This is where the opportunity lives. But there’s a subtler risk worth naming first.
We don’t control what AI says about us, but we control what happens when someone arrives
When a user asks “how much does it cost to study at the University of Edinburgh?” in Google or Bing, the AI summary doesn’t necessarily draw from our content. It synthesises from whatever it finds, and that might include a comparison page from a competitor institution that frames us as the expensive option, or an aggregator working from outdated figures. The user absorbs that framing before they’ve visited us at all.
This matters because users arrive with expectations already shaped. If our site then delivers a confusing, hard-to-navigate experience that doesn’t quickly surface authoritative answers to the questions they came with, we’ve failed twice: once in the AI layer we don’t control, and once on our own platform, where we do.
A strong on-site search experience is part of the answer. When users can quickly find accurate, up-to-date information on our platform, in our voice, with our context, we’re not just serving them better. We’re giving them a reason to trust our content over whatever summary brought them here. That matters especially for high-stakes queries around fees, entry requirements, and outcomes, where a third-party framing in an AI summary could genuinely influence a decision.
But here’s the question I keep coming back to: how would we know whether we’re actually delivering that? How confident are we that when someone arrives and searches for fee information, or scholarship options, or how to apply, they’re getting a result that reflects our best, most accurate content and not something buried, outdated, or missing entirely?
That’s where site search starts to feel like something more than a navigation tool.
Site search as a content performance monitor
A search tool you control gives you a feedback loop that no external analytics can replicate. Search logs tell you what people came looking for and couldn’t find through your navigation or from an AI summary. That’s not just useful data; it’s a content audit running continuously, written by your users.
Queries with no good results point towards content gaps. Repeated searches for the same thing might signal a labelling or findability problem. High search volume on a topic you thought was well-covered could mean the content exists but isn’t structured in a way that surfaces it.
Unlike external analytics, which tells you what happened, site search logs can tell you why: what someone was trying to do when they gave up, clicked away, or drilled deeper.
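To make that concrete, here is a minimal sketch of the kind of analysis those logs enable, assuming a hypothetical CSV export with query, result_count, and clicked columns. The column names and format are my own illustration, not any particular search product’s schema.

```python
import csv
from collections import Counter

# Assumed log format: one row per search, with the columns below.
# Real search tools will export something different; this is only a
# sketch of the kind of analysis the logs make possible.
def load_searches(path):
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {
                "query": row["query"].strip().lower(),
                "results": int(row["result_count"]),
                "clicked": row["clicked"] == "true",
            }

def audit(path, top_n=20):
    all_queries = Counter()    # overall demand, regardless of outcome
    zero_results = Counter()   # likely content gaps
    no_click = Counter()       # results returned, nothing chosen: relevance or labelling problem

    for search in load_searches(path):
        all_queries[search["query"]] += 1
        if search["results"] == 0:
            zero_results[search["query"]] += 1
        elif not search["clicked"]:
            no_click[search["query"]] += 1

    print("Most common queries:", all_queries.most_common(top_n))
    print("Zero-result queries:", zero_results.most_common(top_n))
    print("Results shown but nothing clicked:", no_click.most_common(top_n))

if __name__ == "__main__":
    audit("search_log.csv")
```

Even something this simple maps onto the signals above: zero-result queries point at gaps, and queries that return results nobody clicks point at labelling or relevance problems.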
Interrogating both sides of the search
The real power, as I see it, comes from owning the full picture: what goes in, and what comes out.
On the input side, you have user queries, unfiltered, unsanitised, and often surprisingly candid about what your content is missing or getting wrong.
On the output side, you have the results your search returns: which content is being surfaced, how confidently, and whether it’s actually relevant. A query returning weak or irrelevant results is a signal. A query returning nothing is a louder one.
When you can interrogate both ends of that pipeline, you can start to close the loop. You can identify underperforming content before a user gives up on it. You can spot where your taxonomy doesn’t match how people actually talk about things. You can track whether content improvements change what gets returned for a given query. And critically, you can start to test whether your most important content is performing as you’d hope, including the answers to the questions AI is already being asked about you.
This feels like a feedback mechanism that no external tool can give you, grounded in what your users searched for, on your platform, against your content.
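To give a flavour of what testing the output side could look like, here is a rough sketch that runs a handful of high-stakes queries against a site search endpoint and checks whether the page we would consider authoritative appears in the top results. The endpoint, its JSON response shape, and the query-to-page mapping are all assumptions for the sake of illustration, not a description of any existing setup.

```python
import requests

# Hypothetical: the queries we most want to keep an eye on, mapped to the
# page we would consider the authoritative answer for each one.
SEARCH_ENDPOINT = "https://www.example.ac.uk/search"
EXPECTED = {
    "tuition fees": "/studying/fees",
    "entry requirements": "/studying/entry-requirements",
    "scholarships": "/studying/scholarships",
}

def top_results(query, n=5):
    """Return the URLs of the first n results for a query."""
    resp = requests.get(SEARCH_ENDPOINT, params={"q": query}, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: {"results": [{"url": ...}, ...]}
    return [r["url"] for r in resp.json()["results"][:n]]

def check_key_queries():
    for query, expected_path in EXPECTED.items():
        urls = top_results(query)
        found = any(expected_path in url for url in urls)
        status = "OK  " if found else "MISS"
        print(f"{status} {query!r} -> expected {expected_path}, got {urls}")

if __name__ == "__main__":
    check_key_queries()
```

Run on a schedule, a check like this turns “is our most important content performing?” from a hope into something you can see drift when it changes.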
There is an argument that site search itself could go further, using ELM to surface AI-generated summaries grounded in our own content, rather than leaving that layer entirely to Google and Bing. But that is a conversation for another post.
Content quality sits at the foundation
Site search can surface problems, but fixing them is a separate conversation, one about content ownership, editorial process, and where responsibility sits. What site search data can change is the evidence base for that conversation. Instead of relying on assumptions about what content is needed, or waiting for user feedback to trickle in, there’s a continuous signal available.
Good titles, clear headings, accurate metadata, and well-structured content aren’t just best practices. They’re what make that signal readable. The better the content is structured, the more faithfully a search tool can reflect what’s actually there, and the more useful its logs become as a diagnostic. It also stands to reason that when AI systems draw on that content, they’re drawing on something accurate and well-framed, rather than leaving the field open to whoever has structured their content better.
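As a small illustration of the structural side, here is a sketch that lints a few pages for a title, a meta description, and a single main heading. The URLs, the length threshold, and the checks themselves are placeholders, examples of what “well-structured” might mean rather than any agreed standard.

```python
import requests
from bs4 import BeautifulSoup

# Illustrative only: a handful of structural checks that make a page easier
# for a search index (or an AI system) to represent faithfully.
PAGES = [
    "https://www.example.ac.uk/studying/fees",
    "https://www.example.ac.uk/studying/entry-requirements",
]

def lint_page(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    problems = []

    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    if not title:
        problems.append("missing <title>")
    elif len(title) > 70:
        problems.append("title longer than ~70 characters")

    description = soup.find("meta", attrs={"name": "description"})
    if not (description and description.get("content", "").strip()):
        problems.append("missing meta description")

    if len(soup.find_all("h1")) != 1:
        problems.append("expected exactly one <h1>")

    return problems

for page in PAGES:
    issues = lint_page(page)
    print(page, "OK" if not issues else issues)
```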
Rethinking what success looks like
If AI is handling the top of the funnel, raw session volumes seem to me to be an increasingly unreliable measure of whether a web presence is doing its job. Site search offers a different kind of evidence: did people find what they were looking for? What were they looking for that wasn’t there? Where did the content let them down?
These feel closer to the questions that actually matter, and a well-instrumented site search can start to surface them.
Back to the question
So, do we need site search?
My view is yes, but perhaps not only for the reason you might expect. It’s one of the few tools we control that can tell us, in our users’ own words, what our content is and isn’t doing. When users arrive, already primed by AI summaries we had no hand in, it’s often the fastest route to the authoritative answer we’d want them to find. And without it, we’re largely guessing whether our most important content is performing as we’d intend.
In a web landscape where external signals are becoming less reliable, that feedback loop seems more valuable, not less.
The question isn’t whether we need it. It’s whether we’re actually listening to what it’s telling us.