
How do students respond to AI-powered search, and how does it compare to Google? More Drupal AI UX experiments

In January, the UX team spent a day ideating on how to apply Drupal AI to University web content management and built a prototype including an AI-powered search feature. But would students find it useful and usable? Preliminary UX research revealed useful insights.

Search is an aspect of web content management that is notoriously difficult to get right. A useful way to consider what search needs to support is a collection of four information-seeking behaviours described by information architect and interaction designer Donna Spencer:

  • Known item seeking
  • Exploratory seeking
  • Don’t know what they need to know seeking
  • Re-finding seeking

Read more in the source article: Four Modes of Seeking Information and How to Design for Them by Donna Spencer, Boxes and Arrows blog.

Breaking search down into its simplest components, it comprises:

  • a way of entering search queries
  • a way to index content available to answer search queries
  • a way to compare search queries with indexed content to find similarities or matches
  • a way to deliver relevant responses to the search queries entered.
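As a rough illustration only, the sketch below (in Python, with entirely made-up content) shows how these four components fit together in their simplest keyword-matching form; a real search system would of course be far more sophisticated.

```python
# Illustrative sketch of the four search components above, using naive
# keyword matching. All content and names are invented for demonstration.

def index_content(pages):
    """Index content: build a simple map of term -> set of page ids."""
    index = {}
    for page_id, text in pages.items():
        for term in text.lower().split():
            index.setdefault(term, set()).add(page_id)
    return index

def search(query, index, pages):
    """Compare a query against the index and deliver matching pages."""
    matched = set()
    for term in query.lower().split():             # the entered query
        matched |= index.get(term, set())          # matching against the index
    return [(pid, pages[pid]) for pid in matched]  # delivering responses

pages = {
    "degrees": "undergraduate archaeology degrees available",
    "fees": "tuition fees for archaeology degrees",
}
index = index_content(pages)
print(search("archaeology degrees", index, pages))
```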

Advances in AI and the prevalence and availability of large language models (LLMs) have highlighted the potential to apply Generative AI to search, particularly in two areas: conversational search (LLMs can interpret and respond to colloquial, conversational-style queries) and semantic search (Generative AI models can generate vector embeddings for use in indexing and querying, supporting searches based on meaning rather than keywords).
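To make the semantic search idea concrete, here is a minimal, hypothetical sketch: documents and queries are represented as embedding vectors (invented by hand here; in practice generated by an embedding model) and compared by cosine similarity, so a query can match a page with which it shares no keywords.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, independent of magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# In a real system these vectors would come from an embedding model;
# the three-dimensional values here are invented for illustration.
document_vectors = {
    "Archaeology degrees at Edinburgh": [0.9, 0.1, 0.3],
    "Tuition fees and funding": [0.2, 0.8, 0.1],
}
query_vector = [0.85, 0.15, 0.25]  # e.g. embed("what courses can I study?")

best_match = max(document_vectors.items(),
                 key=lambda item: cosine_similarity(query_vector, item[1]))
print(best_match[0])  # matches on meaning despite sharing no keywords
```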

In a day of AI exploration at the start of the year, the UX and web development teams worked with Drupal AI experts Freely Give to experiment with AI-powered search on the University website. One of the outputs was a prototype AI-powered search chatbot (for brevity: an AI searchbot) installed on a test site including demonstration content from the School of History, Classics and Archaeology (HCA).

Read more about the AI experimentation in my earlier blog post:

Can AI help or hinder search? Trials with Drupal-AI boosted search and AI Assistants

Doing our own experiments with the AI searchbot helped us familiarise ourselves with what AI-powered search could do, how it worked, its limitations, and the ways we could influence its success using configuration and pre-prompting mechanisms.

This type of experimentation could only teach us so much about whether the AI searchbot was useful and usable for end users, however, since those building the searchbot were also the ones appraising it, meaning an objective viewpoint was missing. To counter this bias and learn more about the usefulness and usability of the AI searchbot, I completed a short round of UX research in which I asked student participants to use it to find information on the demonstration HCA site.

My overarching goals for this research were to:

  • Gain initial insight into how students perceive AI-powered search
  • Find out their expectations for an AI searchbot on a University website and learn how they rate its success
  • Understand how they feel it compares to the other search mechanisms they use.

I felt that gathering this information would help ensure we make responsible use of AI and, in particular, make evidence-informed decisions about any adoption or development of AI within search.

I designed a simple test drawing on previous AI UX research

My test set-up was reasonably straightforward: to ask participants to search for pieces of information using the AI searchbot on the HCA demonstration site, and then to look for the same information using the search mechanism they would usually use. In both types of search I would encourage participants to think aloud, and ask them to point out differences between the search results they obtained via each method.

I used data about top tasks and recent search queries to define what participants would search for in a scenario

Since the AI searchbot was installed on the HCA demonstration site, the areas participants would search for needed to be relevant to a School site such as HCA. I consulted data from searches carried out using University search as well as the results of a previous University-wide top tasks survey (conducted to identify which University web content was of highest priority to University audiences). From a consolidated list of search tasks, I selected four relevant tasks which I used to shape a test scenario as follows:

Imagine you are a school-leaver interested in studying for an undergraduate degree in Archaeology. You’re looking for some information. How would you firstly use the AI searchbot and secondly your chosen search method to find information on the following topics:

  1. Archaeology degrees available at the University of Edinburgh
  2. Cost of studying an Archaeology degree identified to be of interest
  3. Help to cover the cost of studying the degree
  4. Options for working alongside study as a way to cover the cost.

I defined some success indicators to assess how well the AI performed

Having previously conducted research on how people use Generative AI in a chatbot, I had learned that given the non-deterministic nature of this technology, it was likely that no two tests would give the same result, meaning that trying to define and adopt an ‘ideal search response’ as a testing benchmark was futile. Instead, drawing from conversational search design, I defined success indicators ahead of the test, to help me gauge how well the AI technology supported the students to complete the searching tasks.

I defined the following success indicators:

  1. Participant has a positive response to being presented with the AI searchbot
  2. Participant can communicate with the AI searchbot using language they would naturally use
  3. AI searchbot responds to participants’ queries with relevant answers (no hallucinations)
  4. Participant reflects on a positive experience

I set out to observe the actions of five participants for the initial insight I wanted

Anticipating a high degree of variability in participant responses to the test, I decided to recruit a small sample of five participants, as I felt that this would provide sufficient data for the preliminary insights I wanted.

In particular, I was keen to examine:

  • How participants interacted with the AI searchbot for each of the search tasks
  • The responses it gave
  • The relevance of the responses in light of the tasks and scenario
  • Differences in the way participants interacted with their preferred search method
  • Ways the presentation of the search results and responses differed
  • How participants rated the different search results and responses

Taken together I felt this data would provide a preliminary steer on whether the AI searchbot had potential to be useful and usable, and the conditions and factors that needed to be in place to support this.

I assessed usefulness and usability by analysing experiences task-by-task and overall

As hoped, the testing generated a wealth of insightful research data, in the form of the interactions within the AI searchbot and with Google (which was the search mechanism preferred by all student participants), as well as comments from the students as they thought aloud during the tests.

Observations from each task revealed variable expectations and responses

Considering each task separately, I was able to ascertain what each student expected from the AI searchbot in the context of each task, whether they would choose to use it, whether it met their expectations and, if not, where it fell short.

Task 1 – Finding archaeology degrees – participants used similar prompts, got varied AI responses and judged them differently

All five student participants typed a prompt along the lines of ‘what degrees are available’ (none specified archaeology degrees in their prompt), and received similar responses from the AI searchbot in the form of structured or bulleted text with a selection of the degrees available and, among other links, a link to the HCA page of archaeology degrees. Visiting this link, all five felt it answered their query to an extent, although depending on their prior knowledge they had further expectations. When repeating the task in Google, students didn’t frame their search as a question, instead entering a collection of keywords or a phrase such as ‘archaeology degrees Edinburgh University’.

Some participants expected the AI responses to include a full list of degrees

Picking up on the fact that the searchbot response contained only a selection of all the archaeology degrees available, participants commented that the full list (including combined programmes) would have been helpful to have upfront, rather than a ‘hand-picked amount’ in the words of one participant.

Some were confused by responses including MA degrees which they thought were for postgraduates

Analysing responses which contained degrees like MA Ancient Mediterranean Civilisations, MA Archaeology, and MA Archaeology and Ancient History, some participants questioned whether the AI searchbot had provided the correct information: they were unfamiliar with these being undergraduate programmes and thought they were Masters programmes for postgraduates.

The AI searchbot responses surfaced useful links which it presented in different ways

Queries posed by the students were similar, and all the AI responses contained links to the programme pages the students expected. Although every response contained links, these were presented differently by the AI (as is to be expected from a non-deterministic technology). In some cases the AI presented links from the HCA site that went to individual programme pages for 2025 entry (outside the HCA site); in others it provided hyperlinked pages listing degrees in a particular subject, for example Archaeology degrees. The links were also styled differently, with some indicative of the destination and others simply saying ‘here’.

Screenshot showing different outputs from the AI searchbot, differing in style

Google responses clarified MA degrees were for undergraduates and presented linked pages more clearly

In the AI searchbot, participants were drawn to links to the individual programme pages since they deemed these verified sources of information. Given the limited space in the chatbot window, these were displayed as hyperlinks which did not always make the link destination easy to establish. By contrast, Google (and the Google AI summary used by most participants) displayed each result with a link or an icon out to the page, together with a page summary. Participants commented that they found this display easier to scan when selecting a link of interest.

Task 2 – Finding the cost of a degree of interest – most did not use the AI, going direct to the degree pages previously found

Since they were all current students, the participants had prior experience of calculating the fees for their programme of study, and they recognised that the total cost would depend on factors like their country of residence and fee status. This experience shaped what they expected from the AI searchbot, and based on what they knew, most participants felt that going direct to the webpage for their degree of interest was the best way to find the information required by the task.

Participants that did use the AI expected it to highlight factors affecting fees or to ask clarifying questions

A minority opted to ask the AI searchbot for fee information for specified degree programmes and checked to see that its responses took into account factors impacting degree costs. The AI searchbot responses typically did mention these factors, with links to appropriate fee information pages to find out more.

Screenshot showing differing styles of AI responses to searches for fee information

Google responses surfaced sample figures for tuition fees

The main difference between the AI searchbot responses and the Google results was that the Google results included indicative fee figures, with the dependency factors set out on separate lines alongside links to the information sources. Participants said they found this layout helpful for getting an idea of the cost at a glance, but that they would still go to the appropriate degree programme pages to make sure they had the right information.

Screenshot of two Google responses showing sample fees and breaking down how fees were calculated

Task 3 – Finding help to cover degree costs – most expected information from internal and external sources

All participants used the AI searchbot for this task, each wording their query slightly differently but along the same lines. All received information that they felt was of some use, but less use than the results from Google.

Information provided by AI responses was correct but limited

Although the information provided by the AI searchbot was acceptable in terms of the questions posed, participants felt the answers lacked detail and felt like ‘a canned response’, in the words of one participant. The AI responses included very general information in short sentences, and the only link provided was to the Scholarships and Student Funding website.

Screenshot showing the AI searchbot response to task 3 relating to financial assistance available at the University

Google provided useful external links and specific scholarships

Completing the same search task in Google, participants received the same link to the Scholarships and Student Funding website, but also links to sources outside the University which were useful in the context of the task, for example links to external funding providers like the Student Awards Agency Scotland. Google also surfaced direct links to individual scholarship awards offered by HCA.

Screenshot showing a typical response to task 3 following a Google search, showing internal as well as external sources of information about financial assistance

Neither the AI searchbot nor Google provided information specific to undergraduates

Although the information provided by both search options was acceptable in terms of the questions posed by the participants, not all of it was relevant to the school-leaver scenario: sources like the Graduate Discount Scheme and other alumni-based offers were included, which would have been irrelevant to a prospective undergraduate in the test scenario.

Task 4 – Finding out about working alongside study – participants sought clear rules

All participants used the AI searchbot for this task. Whereas in other tasks participants had appreciated links they could follow to get to the source of information, for this task, most said they would value having specific guidance surfaced from the different sources, pulled together and presented in answer to their query – for example the number of hours they were permitted to work alongside their studies, and what the restrictions were for working on a student visa.

AI searchbot responses included links to the University’s Careers Service

As in the previous tasks, the searchbot produced conversational responses in the form of structured options with links to relevant sources found in the HCA site content, the main one being the University Careers Service, and another being Employ.ed on Campus, also part of the Careers Service. As well as the links, responses included details of working hours (including the rule of no more than 15 hours per week) and a pointer to check visa regulations when considering working alongside studies, which participants felt was helpful.

Screenshot showing AI searchbot responses to task 4 about working alongside studying at the University

Google pulled out similar links and points of guidance

When participants used Google to complete this task, the results were comparable to the responses provided by the AI searchbot. Google’s AI summary pulled out and presented the 15-hours-per-week rule and included links to the Careers Service and Registry Services (the sources of guidance on working on a student visa).

Screenshot showing a typical Google response to task 4 about working alongside studying at the University

Reviewing the tests as a whole I drew several inferences and conclusions

It was helpful to zoom out from individual tasks and reflect on patterns that had emerged from watching the participants search using the AI searchbot and Google, to critique whether the searchbot was usable and useful, and to provide food for thought about how it could be improved.

The AI searchbot was usable – it performed well against the success indicators

When introduced to the AI searchbot, each of the five student participants was happy to use it, and apart from several choosing not to use it for the tuition fees task, they used it successfully for each of the tasks. Of all the interactions between participants and the searchbot, only once did the AI return a null response (phrased as ‘I’m sorry I don’t know that, would you like to ask something else?’), meaning that almost all of the time the AI searchbot provided relevant information in a response that was understandable to the participant.

To be useful, the AI searchbot responses needed to contain links to verifiable sources

When they reviewed the content of the AI searchbot responses, participants were drawn to the links more than the conversational text, as they valued accessing the sources of information provided. This aligned with the way several students described using Google: to ‘parachute’ them (in the words of one student) into a relevant section of the University website so they could find the content they were looking for. This suggested that a potentially useful function of an AI searchbot could be to surface links to useful information from within the University website, as an alternative to the ‘outside-in’ approach of starting from Google in a separate browser.

As expected, the AI searchbot wasn’t a substitute for Google but could complement it

Participants preferred Google to the AI searchbot on several fronts:

  • presentation of data and information (both in the form of links and surfacing content from sources in its AI summaries)
  • its ability to provide a greater range of search results (including non-University sources).

Taken together, however, both had relative strengths as search mechanisms, with the AI searchbot performing better when limited information was needed and Google supplementing when more was required. This was evidenced by two participants taking data from the AI searchbot and pasting it into Google to build a search trail, rather than taking further conversation turns with the searchbot itself.

With goals of improved usefulness and usability in mind, I identified several ideas to evolve the searchbot

Noting some of the drawbacks of the AI searchbot in this test, and applying my understanding of how it was built and of the potential of the open-source Drupal AI framework, I highlighted some areas for development, as well as some ideas for further testing, to investigate the potential of AI-powered search further.

Accessibility testing the searchbot would establish ways to improve its usability

From observing participants interacting with the searchbot in the tests, several areas emerged where the ease of use could be improved. When handling queries that returned a relatively large volume of data, the searchbot did not present information in an easily scannable way, due to the relatively small area of the screen it occupied. One way to counter this would be to let users increase the size of its user interface, avoiding the need to scroll to read the text. Adhering to consistent formatting for content like links, bullet points and numbering could also improve how easily people could use the information provided by the searchbot. Before making such adjustments, however, it would be necessary to carry out accessibility testing on the searchbot interaction experience to ensure any changes made had holistic benefit for all users.

Adjusting the AI searchbot’s configuration could improve consistency of its responses

Comparing responses to very similar queries in the experiment, the AI searchbot provided different responses with varying content. For some searches, such as those likely to start vaguely and become narrower and more precise through progressive stages, the AI searchbot’s conversational interface could work well to begin a trail. If, however, it was important for the same response to be returned for a range of similar queries (for example, a comprehensive list of degrees in a particular subject), the AI could not be relied on to provide this in its current configuration. To improve this (though probably not guarantee it), experimentation could take various forms: using back-end prompting to confine results and set rules, adding specific agents to handle specific queries and lock down responses, or adjusting the pool of information that responses are built from (a form of Retrieval Augmented Generation, or RAG).
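As a sketch of the first and last of these ideas, the snippet below shows how back-end rules and a retrieved pool of approved content might be combined into a single prompt. The function names and prompt wording are illustrative assumptions, not the Drupal AI module’s actual API.

```python
# Hypothetical sketch of confining responses with back-end rules plus a
# controlled retrieval pool (a simple form of RAG). retrieve() and
# call_llm() are placeholders for the configured search index and model.

SYSTEM_RULES = (
    "Answer only from the provided context. "
    "When asked about degrees, list every programme found in the context. "
    "If the context does not contain the answer, say you do not know."
)

def answer(query, retrieve, call_llm):
    """Build a rule-constrained prompt from retrieved site content."""
    context = "\n".join(retrieve(query))  # top-ranked chunks from the index
    prompt = f"{SYSTEM_RULES}\n\nContext:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```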

Embedding contextual knowledge through the searchbot could facilitate a more University-centric enterprise search

This round of experimentation focused on search and retrieval from a single School site, whereas successful search and retrieval across the University web estate requires search solutions at enterprise level. Testing results showed that students would be unlikely to swap Google for AI-powered search in the form tested for common search queries (such as those appearing in the top tasks list). That said, taking this research data together with knowledge of how AI-powered search works, there is potential to use mechanisms like pre-prompting and agentic frameworks to fine-tune AI-powered search to meet nuanced user needs and searching scenarios in a way that is not possible with traditional lexical approaches to search, and that could present a more University-centric search offering than Google currently provides.

With reference to the four modes of information seeking highlighted by Donna Spencer, applying Gen AI to support specific audience- and scenario-focused searches could improve users’ experiences of ‘Exploratory seeking’ and ‘Don’t know what they need to know seeking’, given its capacity to work with semantic meaning rather than keywords. In other words, rather than replacing Google, AI-powered search may have a role in helping users familiarise themselves with the nuances of the University, supporting the queries they have within different realms of the University ecosystem.

‘A good strategy for developing an effective “Google for my sector” is to focus on solving actual user problems, not trying to “boil the ocean”.’ – Charlie Hull, expert in enterprise search and AI, from his post: Don’t look for one ring to rule them all in enterprise search

Adopting this UX-focused concept relies on selective application of University knowledge gathered from both UX and broader research. With this wider goal in mind, several next steps and areas for investigation are proposed, all of which centre on the idea of using the AI searchbot as a container for University-specific contextual knowledge: making use of institutional sources not readily available to Google, and using this information to tailor appropriate responses.

The searchbot could be tailored to help specific audiences if provided with contextual persona data

When it was set up, the AI searchbot on the HCA demonstration site contained a basic prompt advising it to prepare its responses with a prospective student persona in mind. Given the results of the tests, and considering the responses to the chosen tasks in particular, there are several ways the pre-prompt could be augmented with persona-specific information to help it provide answers that are more insightful for the target audience.

At one level, more detailed knowledge about the needs of specific University audiences and relevant contexts, gathered from previous user research, could be fed to the AI in a machine-readable format for it to refer to when processing queries and formulating responses. Taking this further, an agentic approach could portion out specific aspects of persona information to defined agents as part of a triaged orchestration, achieving a more tailored and responsive type of search handling. For example, an agent tuned to pick up undergraduate queries could hand over to specialists assigned to support specific aspects of the undergraduate user journey (programme selection, eligibility assessment, financial appraisal and so on), with an agent tailored to writing undergraduate-appropriate content (for example, ensuring responses contained links and were written plainly) in place to close the chain and deliver the response to the enquirer.
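A minimal sketch of this triage idea follows. The persona notes, agent roles and keyword-based classifier are all hypothetical stand-ins; in practice the triage step would itself likely be an LLM call configured through the Drupal AI framework.

```python
# Hypothetical triage-and-specialists orchestration. The persona notes
# and keyword classifier below are crude stand-ins for LLM-based agents.

PERSONA_NOTES = {
    "undergraduate": "School-leaver; use plain language; always include links.",
    "postgraduate": "Holds a prior degree; research and funding focused.",
}

def triage(query):
    """Crude routing step standing in for an LLM triage agent."""
    if any(w in query.lower() for w in ("phd", "masters", "postgraduate")):
        return "postgraduate"
    return "undergraduate"

def handle(query, research_agent, writer_agent):
    """Route the query, gather specialist input, then write the reply."""
    persona = PERSONA_NOTES[triage(query)]
    draft = research_agent(query, persona)  # e.g. fees, eligibility lookups
    return writer_agent(draft, persona)     # closes the chain for the enquirer
```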

Data mapping could be used to enable the searchbot to act as a subject matter expert, consolidating information from varied sources

In several of the search queries included in the test scenario, providing the kind of answer the students expected depended on gathering and presenting information from multiple sources across the University web estate. Using internal knowledge of common search queries as a guide (drawing on top tasks data and data from recent search queries), information sources relevant to each query could be mapped and consolidated, consulting appropriate University subject matter experts to establish the most useful banks of answers. The outputs of this mapping exercise could then be provided to the AI searchbot to call upon (either through agents or prompts), as sketched below.
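Illustratively, such a mapping might start life as something as simple as the structure below; the query keys and URL paths are placeholders, and the real mapping would come from the expert-led consolidation exercise described above.

```python
# Placeholder query-to-sources map; keys and URL paths are invented and
# would in practice come from the expert-led mapping exercise.

QUERY_SOURCE_MAP = {
    "funding": [
        "https://www.ed.ac.uk/student-funding",   # hypothetical path
        "https://www.ed.ac.uk/hca/scholarships",  # hypothetical path
    ],
    "working while studying": [
        "https://www.ed.ac.uk/careers",           # hypothetical path
    ],
}

def sources_for(query):
    """Return curated source pages to feed into the searchbot's context."""
    query = query.lower()
    return [url for key, urls in QUERY_SOURCE_MAP.items()
            if key in query
            for url in urls]
```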

If this worked as expected, the AI would be equipped to respond to specific questions and to surface relevant pages in the web estate, so users would not have to rely on finding their way through the web navigation scheme or carry out repeated Google searches to find potentially relevant pages. Completing the data mapping exercise for a handful of common queries would be a useful initial step to establish whether such an approach was viable and worth the effort.

The searchbot could translate University jargon into plain language

Recognising the ongoing potential of University-specific terminology to confuse those unfamiliar with all the nuances of the institution (evidenced in this round of testing by the misunderstanding over whether an MA was an undergraduate or postgraduate programme), the AI searchbot could play a role in ‘jargon busting’ language used on University websites. Several knowledge items already exist that could be supplied to the AI in machine-readable format for it to use to decode University-speak into clearer terms.
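As a hypothetical illustration, a machine-readable glossary and a simple post-processing step might look like the sketch below; the entries and matching logic are invented for demonstration.

```python
# Invented glossary entries and a naive substring matcher, purely to
# illustrate the 'jargon busting' idea in machine-readable form.

GLOSSARY = {
    "MA": ("At Edinburgh, the MA (Master of Arts) in subjects such as "
           "Archaeology is an undergraduate degree, following Scottish "
           "tradition; it is not a postgraduate Masters."),
    "programme": "A degree course of study.",
}

def add_jargon_notes(response_text):
    """Append plain-language notes for glossary terms found in a response.

    Substring matching is deliberately naive here; a real implementation
    would need word-boundary and case handling.
    """
    notes = [note for term, note in GLOSSARY.items() if term in response_text]
    if notes:
        return response_text + "\n\nPlain-language notes: " + " ".join(notes)
    return response_text
```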

If fed data from these sources and prompted accordingly, the AI searchbot could potentially help people understand and learn more about the University without having to look the information up elsewhere, providing an experience that opens the University out to external audiences rather than leaving them feeling like outsiders looking in.
