Can AI help or hinder search? Trials with Drupal AI-boosted search and AI Assistants 

During a day spent working with Drupal AI specialists, we explored different ways to apply AI to enhance search and retrieval of content from University websites.  

AI and LLMs have facilitated new approaches to web search 

Web search has traditionally operated using standard database indexing methods, matching keywords (lexical search). AI advancements, especially in relation to large language models (LLMs), have paved the way for new search capabilities that could address the shortfalls of lexical search and improve the quality of the results presented in response to user queries.

Semantic search (where search terms are interpreted, not just matched) can be made possible by incorporating LLMs to analyse and make sense of queries written in users’ natural language, and, on the retrieval side, by making use of databases of vectorised content. This enables similarity ranking and the return of search results that respond to queries conceptually, without relying on specific keyword matches.
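
To make the distinction concrete, here is a minimal sketch in Python contrasting the two approaches. The toy corpus, the embed() stand-in and the scoring are all illustrative; a real vector database would use a proper embedding model rather than hand-picked concept counts.

```python
from math import sqrt

# A toy corpus of page titles standing in for indexed web content.
PAGES = [
    "Undergraduate History degrees",
    "Postgraduate taught programmes",
    "Library opening hours",
]

def lexical_search(query: str, pages: list[str]) -> list[str]:
    # Traditional keyword matching: a page is returned only if it
    # literally contains one of the query terms.
    terms = query.lower().split()
    return [p for p in pages if any(t in p.lower() for t in terms)]

def embed(text: str) -> list[float]:
    # Placeholder for a real embedding model: here we just count a few
    # hand-picked concepts so the example runs without external services.
    concepts = ["history", "degree", "course", "library"]
    lowered = text.lower()
    return [float(lowered.count(c)) for c in concepts]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def semantic_search(query: str, pages: list[str]) -> list[str]:
    # Semantic search: rank every page by vector similarity to the query,
    # so conceptually related pages can surface without exact keyword matches.
    query_vector = embed(query)
    return sorted(pages, key=lambda p: cosine(query_vector, embed(p)), reverse=True)

print(lexical_search("history courses", PAGES))
print(semantic_search("history courses", PAGES))
```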

Drupal AI developments have resulted in innovative search approaches  

In the content management system realm, contributors to the open-source system Drupal have developed several AI-powered mechanisms to search and retrieve content, including incorporation of vector databases as well as use of AI Assistants and Agents. At the University, we use Drupal to power EdWeb2 and we wanted to find out more about these new technologies to understand if they had potential for improving search of our websites. We collaborated with Jamie Abrahams and colleagues from Freely Give, a Drupal AI specialist agency, for a day to learn about the options available and try them out in a series of experiments.  

Read more about the format of our AI day in my related blog posts:  

Bringing AI innovation to University web content management – a day of Drupal AI exploration

An automated Editorial Style Guide? Experimenting with Drupal AI Automators 

Comparing traditional search to AI search 

We recognised the possible benefits AI-enhanced search could bring; however, it was important to carry out experimentation in a controlled way to test our assumptions. In particular, it was necessary to compare the outputs and functionality of AI-enhanced search with traditional search, and to try the AI out in the specific context of the University website, learning how it worked and the factors affecting its effectiveness, so that we could make a reasoned assessment of its potential. We chose to look at three different sorts of search:  

  1. Normal search (traditional database search)  
  2. Boosted search (traditional database search with the addition of semantic search using a vector database)  
  3. AI Assistant search (appearing as a chatbot to receive and answer search queries in natural language)  

Adopting a case-study School site and a set of associated searches  

Our web development team created a case-study School site with representative content to experiment with, and the team at Freely Give prepared three versions of this demo site, each with a different type of search enabled. Thinking about the likely visitors to the School site (for example: prospective undergraduate students, prospective staff, potential funders, industry partners) and their associated information needs and contexts, we prepared a list of typical searches these different people may wish to perform on the site. We considered the kinds of questions they may look to answer, and the sorts of words they may use to articulate these questions. We planned to pose these questions to all three versions of the case-study site and look at the results so we could compare the relative performance of each type of search.  

Looking at responses to an open query about history degree options  

We began with a query typical of a prospective student: ‘I want to find out what undergraduate history courses are available?’ which we entered into all three versions of the demo site. The preferred answer or search result was a specific page listing the history degrees available to undergraduate students. The responses from each version of the demo site were as follows: 

  • Normal search – returned results including a link to the page with a full list of all degrees (postgraduate and undergraduate, not only History) available at the School  
  • AI-boosted search – returned results including a series of links each one for a different individual taught History course, as well as the link to the page with a full list of degrees available at the School 
  • AI Assistant – returned a formulated answer containing links to some individual history courses (both postgraduate and undergraduate) and a link to a course in the Degree Regulations and Programmes of Study (DRPS) site. 

It was interesting to see that none of the three searches had returned the desired search result. To understand why, we needed to unpick the underlying mechanisms and do some more experimentation.  

Investigating boosted search by considering how the vector database was working 

The results from the AI-boosted search contained pages about individual courses and degrees, which suggested that the vector database deemed courses and degrees to be similar concepts. To test this theory we changed the search query to include the word ‘degrees’ instead of courses – so the query was: ‘I want to find out what undergraduate history degrees are available?’. We then compared the results from the normal search vs the AI-boosted search:  

  • Normal search – returned the same results as before – including a link to the page with a full list of all degrees (postgraduate and undergraduate, not only History) available at the School  
  • AI-boosted search – returned different results with the top result being the desired destination page with the list of undergraduate History degrees  

This experiment showed that the AI-boosted search performed better when the search term was more precise, but it also illustrated how difficult it was for the AI to distinguish between degrees and courses at the University of Edinburgh and so provide the preferred search result. Making the AI-boosted search respond better to the query using the word ‘courses’ would have required changes to the underlying vector database the results were coming from, which was not a quick fix, so we turned our investigation to the AI Assistant search results.  
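
As an aside, one way to sanity-check this kind of ‘courses versus degrees’ behaviour outside Drupal is to compare query embeddings directly. The sketch below assumes the sentence-transformers Python package and an illustrative model choice; the scores it prints are not those produced by the vector database we were using.

```python
# Compare how close two phrasings of the query sit to a target page title,
# using an off-the-shelf embedding model (illustrative only).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

target = "Undergraduate History degrees offered by the School"
queries = [
    "I want to find out what undergraduate history courses are available?",
    "I want to find out what undergraduate history degrees are available?",
]

target_vector = model.encode(target)
for query in queries:
    score = util.cos_sim(model.encode(query), target_vector).item()
    print(f"{score:.3f}  {query}")
```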

Tweaking the AI Assistant prompt to see how the results changed    

Reviewing the chatbot answer, the links provided to individual courses in the DRPS site suggested that, like the AI-boosted search, the AI Assistant had not interpreted the search for ‘courses’ as looking for degrees, and had instead regarded courses and degrees as two separate concepts. Discussing this as a group, we identified how difficult it was for an AI to return a satisfactory answer to such an open query about history courses without knowing the context: whether the enquirer wanted results about degrees, programmes, or specific courses. We also identified that it would be helpful if, as well as providing the information, the chatbot suggested a task or call-to-action that would let the enquirer make use of the information supplied.  

Unlike the AI-boosted search, the AI Assistant configuration could be quickly tweaked to change how the chatbot handled search queries and to see if this resulted in the desired improvements in the response. For example, the Assistant prompt could be adjusted so that it either asked upfront about the searcher’s needs and circumstances and modified the response accordingly, or inferred these factors from previous chat interactions and used them to shape its answers, directing the searcher to likely next steps or tasks they might want to complete.  

The original Assistant prompt included the following extract:  

You are a chatbot that can answer questions about The University of Edinburgh. 

Do not have opinions about things that the user asks for, just give back information in a friendly manner. This includes questions about the University. 

Always look up from the database what the user is asking, even if it seems absurd and only base your answers on what they write.  

Do not use your own knowledge to provide information about The University of Edinburgh, just the context you are given. 

You only ever answer questions based on context from searches you were given, you do not make up your own conclusions. If the context that you are given does not answer the user, just answer with “I am sorry, I do not know that, would you like to ask something else?”. 

To try to improve the quality of the search results for the given query, and in particular to encourage a response that was helpful and instructive for someone visiting the case-study School site without prior knowledge of University terms (for example, the way the words ‘courses’ and ‘degrees’ were used and their specific contextual meanings), the AI Assistant prompt was adjusted to add the following instructions:  

“Think about the persona of the user based on the conversation. If you don’t already have a persona for the user, assume they are a prospective undergraduate student. Example personas are a prospective student, a current student, a faculty member, a potential funder”.  

“Always finish with a suggested next step for the user to take on their journey”. 
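
The Drupal AI module handles this plumbing itself, so the sketch below is only meant to show the shape of the change: the base prompt and the new persona instructions combined into a single system prompt for a generic chat-completion call. The OpenAI client, the model name, the condensed base prompt and the omission of the retrieved search context are all simplifications for illustration.

```python
# A minimal sketch of combining the base Assistant prompt with the new
# persona and next-step instructions (not the Drupal AI module internals).
from openai import OpenAI

BASE_PROMPT = (  # condensed from the original prompt quoted above
    "You are a chatbot that can answer questions about The University of "
    "Edinburgh. Only ever answer questions based on context from searches "
    "you were given."
)

PERSONA_ADDITION = (
    "Think about the persona of the user based on the conversation. If you "
    "don't already have a persona for the user, assume they are a prospective "
    "undergraduate student. Always finish with a suggested next step for the "
    "user to take on their journey."
)

client = OpenAI()  # assumes an API key is available in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": f"{BASE_PROMPT}\n\n{PERSONA_ADDITION}"},
        {"role": "user", "content": "I want to find out what undergraduate history courses are available?"},
    ],
)
print(response.choices[0].message.content)
```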

With this adjustment to the prompt made, we cleared the chatbot history and presented the chatbot with the original query: ‘I want to find out what undergraduate history courses are available?’. This time, the response was more appropriate for the persona of the enquirer. It included a formatted list of the titles of available undergraduate history degrees (not just courses), a link to the desired destination undergraduate history degrees page, and suggestions for the enquirer to review the listed degree programmes, explore the entry requirements, or consider going to an open day (with a link to a page about open days). 

Screenshot showing the case-study School demo site with the chatbot showing an improved structured response

Screenshot of the demo site showing the chatbot and the formatted output

Examining responses to a more precise query: about reading room opening times  

In contrast to the first query, our second query was less open, seeking a specific piece of information within a page on the case-study School site. Details about the opening hours and use of a specific library (the Classics Student Research Room (SRR)) were included in an accordion on one of the pages of the demo site, and we wanted to test whether the AI-enhanced search could find this information. For this search, we decided to focus on the chatbot, as it offered the opportunity to learn most quickly (since we could perform a search, review the response, make tweaks to the AI prompt and then repeat the search to see the resulting changes to the response straight away). 

We began with the query ‘When are the student research rooms available to me?’. The chatbot returned an answer with the correct opening times, details of where the rooms were located and a link to the student research rooms page.  

Providing the AI with instructions for interpreting and using University-specific terms  

Keen to build on what we had learned from the first search experiment, we posed a couple of less specific questions to the chatbot: ‘When is the library open?’ and ‘Where would I go to borrow a book?’. For these queries, as expected, the chatbot returned variable results, suggesting that, again, without being given contextual knowledge about University-specific terms (in this case, that the Student Research Room or SRR was a type of library where students could borrow books), it struggled to return appropriate answers. Adjusting the AI Assistant prompt to include the extract below and then repeating the search returned better results.  

“Our site uses Edinburgh specific jargon. When a user asks for something without knowing our terms can you add the University jargon to their query before doing the search. 

Library – Student Research Room 

Library – SRR 

Library – Study Space 

For example if someone says “When is the Library open” 

Instead search for the query “When is the Library Open – Student Research Room” 

When someone uses terminology that they needed specific Edinburgh jargon to find things, please explain to the user what terms the University prefers to use” 
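
On the day this mapping lived in the Assistant prompt, but the same idea can be expressed as a pre-search query-expansion step in code. A minimal sketch, with a hypothetical JARGON glossary standing in for a fuller University word list:

```python
# Expand generic terms into University-specific wording before the query
# reaches the search index (illustrative glossary, not a complete list).
JARGON = {
    "library": ["Student Research Room", "SRR", "Study Space"],
}

def expand_query(query: str) -> str:
    expansions = []
    for generic_term, specific_terms in JARGON.items():
        if generic_term in query.lower():
            expansions.extend(specific_terms)
    return f"{query} {' '.join(expansions)}" if expansions else query

print(expand_query("When is the library open?"))
# -> "When is the library open? Student Research Room SRR Study Space"
```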

Using an Automator to create indexing data to improve information retrieval 

Adjusting and engineering the prompts of AI Assistants in the context of on-page chatbots was one way to improve the precision of AI search, but Jamie introduced us to another way to use AI to help improve search – namely, drawing on AI’s capabilities for extracting meaning from data by indexing it.  

AI Automators could be instructed to review the content on a page (for example, from the University of Edinburgh website) and index it, coming up with ways people might want to search this content, or compiling a list of tags or synonyms relevant to the content being presented. This AI-generated indexing data could then be added as supplementary metadata into vector databases and/or AI Assistant prompts, to potentially improve the precision of information retrieval for the content on that page when a relevant search query was received.  

We tried this idea out by building an Automator to index the destination page from the previous search experiment (with the information about the Classics Student Research Rooms). The prompt for the Automator read as follows:  

“You are a helpful assistant and your job is to help people find what they need by taking the provided content and summarising it, thinking about the ways people would want to search it to find information.  

When you look at the content on this page think about terms that might be unique and odd to the University of Edinburgh and try to think of more generic terms people search for when coming to this sort of page. Give me some tags but also some written options such as people might want to find opening times.” 

When given the Facilities and Communities page mentioned previously, the Automator made a reasonably accurate index of the content on the page, including the following excerpts: 

### Facilities  

**Student Research Rooms (SRR)** Available for study and consultation, these rooms are open to students and staff from the School of History, Classics and Archaeology from 08:00 to 17:00 on weekdays 

**Library Resources** Multiple book collections are available, with options to borrow and consult. Assistance with library materials is provided by the Academic Support Librarian. 

### Tags  

- Classics Department  

- Student Research Rooms  

**Additional information**  

People might want to find library access details 

Details about document digitisation and research access  

It was clear that this indexing data could help the retrieval of relevant information when added into the vector database as an extra source of metadata, alongside the content from the page itself. 
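
As a rough illustration of how that might look (not the actual Drupal Search API and vector database pipeline we used), the sketch below folds an Automator-style summary and tags into the text that gets embedded, and keeps the raw indexing data as metadata. The embed() function is a placeholder for a real embedding call, and the page content is paraphrased.

```python
def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call (e.g. via a vector database client).
    return [float(len(text))]

def index_page(page_id: str, body: str, automator_output: dict, vector_index: dict) -> None:
    # Fold the AI-generated summary and tags into the text that gets embedded,
    # so searches using generic terms can still land on this page.
    enriched_text = "\n".join([
        body,
        automator_output.get("summary", ""),
        " ".join(automator_output.get("tags", [])),
    ])
    vector_index[page_id] = {
        "embedding": embed(enriched_text),
        "metadata": automator_output,  # keep the raw indexing data too
    }

index: dict = {}
index_page(
    "facilities-and-communities",
    "The Student Research Rooms are open to students and staff from 08:00 to 17:00 on weekdays...",
    {
        "summary": "Opening times and access details for the Student Research Rooms.",
        "tags": ["library", "opening times", "SRR", "study space"],
    },
    index,
)
print(index["facilities-and-communities"]["metadata"]["tags"])
```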

Conclusions, reflections and next steps 

From these few, relatively small experiments we learned much about the potential for AI-enhanced search applied to the University website. 

AI can support search but its success relies on a clear view of the search experience  

Analysing the usefulness of the different results to a single search query highlighted that designing search experiences is not a one-size-fits-all concept. In the book ‘Designing the Search Experience: The Information Architecture of Discovery’ (2013) by Tony Russell-Rose and Tyler Tate, the authors categorise information seekers based on their domain expertise (how well they know the subject matter) and technical expertise (how well they know the technology powering the search mechanism). Those novice in both areas tend to adopt an orienteering approach, spending time to reformulate queries repeatedly to get a successful result, whereas those expert in both areas tend to adopt a more directed approach using precise terms to quickly pinpoint what they need. Search experiences need to cater for people adopting both approaches as well as those anywhere in between. Knowledge of the type of search activities characterising these approaches (applied to a specific context) can help ensure the application of AI search supports the processes information seekers follow, rather than hindering them by assuming they are all searching in the same way.  

Providing AI with contextual data and examples helps it provide better search results  

Considering the way AI handled queries probing for University-specific information reinforced the need to pay attention to language: not only the language used across University websites, but also the language visitors may use to frame their queries, as this information could be incorporated into AI configurations to make responses better. Synonyms, glossaries and word lists could all be useful tools in AI configurations to help the AI make connections between what people are searching for and content from the University website that could be relevant and useful to them. 

Thinking about the way AI makes sense of content offers a new way to critique it 

As we worked through the experiments, we learned that the more clearly defined an AI prompt or instruction was, the better the AI could perform its task. When it came to adding AI to support search, if it was possible to give the AI precise instructions on how to handle queries and what content might be relevant to certain queries and not others, the search could be made better. Supplying this data, however, in a way the AI could make sufficient sense of to act upon, depended on the nature and quality of the website content itself. Using the AI Automator to index pages, it was easy to recognise which ones were written with audience needs in mind and which ones were not. The indexing results resembled the kind of AI search results produced by Gemini or Copilot, reinforcing the need to write content with those needs in mind, as it was very plausible that many people searching Google or Bing for information from the University website could take the AI-provided information as a response to their question without ever visiting the source webpages. 

AI Assistants offer the opportunity to support search in customised ways  

Thinking about supporting search with an AI reminded me of one of my previous jobs handling enquiries in a library. Enquiries came in various forms, and often there was not enough information to respond upfront, prompting the need to ask for more detail or clarification. Queries for similar pieces of information prompted me and my colleagues to write templated responses we could quickly send out. Certain specialist queries were referred to colleagues or departments with relevant experience. It occurred to me that training AI Assistants and Agents to handle search follows a similar process. With iterative configuration, AI Assistants could ask clarifying questions, and selected AI Agents could be configured to be expert at handling specific queries (see the sketch after this list) – for example:  

  • A new student Agent: familiar with all the stages a new student needs to complete when they join the University, ready to provide answers to typical areas they get stuck with  
  • A PhD supervision enquiry Agent: very good at processing staff profile information to present possible supervisors in a given research area  
  • A careers service Agent: able to find out what a job-seeker needs and triage them to applicable services.  
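
A very simplified sketch of that triage idea follows. The agent names and keyword rules are hypothetical, and real Drupal AI Agents are configured within the CMS rather than being hand-rolled like this.

```python
# Route an incoming query to a hypothetical specialist agent based on
# simple keyword triage (illustrative only).
def route_query(query: str) -> str:
    lowered = query.lower()
    if any(term in lowered for term in ("matriculation", "enrol", "new student")):
        return "new_student_agent"
    if any(term in lowered for term in ("phd", "supervisor", "supervision")):
        return "phd_supervision_agent"
    if any(term in lowered for term in ("job", "career", "internship")):
        return "careers_agent"
    return "general_assistant"

print(route_query("Who could supervise a PhD on Roman archaeology?"))
# -> "phd_supervision_agent"
```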

Designing good search experiences with AI needs research and a UX approach  

As I learned researching the UX of AI Assistants in the Drupal editorial interface, designing a useful and usable AI-enhanced experience of any type requires a UX approach: surfacing needs through research, and testing features and concepts for feedback and iteration. Based on my recent experiences researching AI, the speed with which the technology can be iterated lends itself very well to the application of UX design, as learnings from one round of testing can be applied to guide the next, fostering a process of continuous learning and improvement. I am excited to continue to experiment with AI and search, and to investigate more ways AI can be applied to resolving problems faced by users of University digital systems, services and products.  

Read more about the University’s experimentation with Drupal AI functionality in my related blog post:  

Making AI useful and usable – consolidated learnings from UX research of Drupal AI Assistants
