Testing navigation concepts in the Web Publishing Platform
As part of the new Web Publishing Platform project, we needed a way to test ideas for new navigational approaches and gather feedback to inform iterative design. We used the testing platform Maze to run remote, unmoderated tests on navigational prototypes built in the design tool Figma, gaining quick insights from large numbers of users.
Navigation in the new platform
An information architecture that helps people find their way to the content they need is key to achieving our vision of a world-class Web Publishing Platform. The new platform is not intended to be a like-for-like replacement for EdWeb, and with the project to develop a new Design System we have an opportunity to take a fresh look at information architecture, with a view to developing improved navigational approaches rooted in a robust information architecture strategy. The University has four principles underpinning its information architecture strategy:
- Provide navigational consistency
- Provide navigation within the website interface
- Avoid choice overload
- Maintain clear page purpose
Embedding these principles helps ensure a navigation experience that is intuitive and straightforward, so people moving around the University web estate know where they are, where they have been and where they can go (consistent with the Nielsen Norman Group definition of a good navigation system).
Navigation discovery research
Building a good navigation system has remained a key focus of the Web Publishing Platform project from the start, with navigation defined as one of the epics on our project roadmap. To ensure we adopted and maintained a user-centred approach, we applied our UX and D process, beginning with vision and goal setting and moving through to ideation. We sketched some initial ideas of navigational approaches and elements but wanted to test these with users before developing them further.
Challenges with testing navigational structures
It can be difficult to objectively assess parts of a navigational structure (such as menus, on-page user interface elements, breadcrumbs and so on) to ensure they are meeting people's needs. People typically use a combination of elements and signifiers across the whole structure to find their way, so examining parts in isolation does not give a representative view of how web navigation happens in practice.
In the book ‘Designing Web Navigation’, James Kalbach acknowledges some of the challenges of testing navigation with users – citing difficulties with meeting information needs of ‘large and disparate’ target user groups and keeping up with the fact that ‘information needs change, even for a single person in the middle of a single search’.
As we designed a testing approach, it was important to bear in mind another piece of wisdom from this book:
No single evaluation will give you a complete picture of navigation success
– James Kalbach, ‘Designing Web Navigation’ (2007)
Not only does the success of a navigation system depend on the sum of its parts, it also depends on the content used. A menu and set of user interface elements may function correctly, but if labels and link wording are unclear, a person won't be able to get to where they want to be, and the navigation system will have failed.
We wanted to be able to experiment with testing our ideas – in particular to make our sketched ideas interactive – so we could see how people would use our concepts and structures to navigate their way around a site. With this information, we could make judgements about our structures, and decide how to iterate on them.
Building navigation prototypes in Figma
The design tool Figma was already in use on the Design System project, so it was an obvious choice for turning our static sketches into interactive prototypes. Using Figma, we could build prototypes made up of multiple screens linked together by clickable user interface elements (like menu items), with details such as hover states and activated states. The results were realistic mock digital products that behaved consistently with a real website: when a person clicked a menu item in the prototype, it simulated navigating to another screen, just as a real website would.
Trialling Maze as a testing tool
We wanted to be able to track and time interactions with our prototypes – in particular to see where people moved around, which parts of the screens they pressed, how they interacted with the menu structure and how long they spent. Keith Symonds from our agency partner TPX Impact (formerly Manifesto) helped us get started with Maze (https://maze.co/), an online testing platform. Maze was a good option for the sort of testing we wanted to do, as it logged the menus and screens visited, captured the durations spent on each and presented interactions as visual heatmaps.
Designing tasks for participants to complete using prototypes
To begin using Maze as a testing platform, we uploaded each prototype and then set up a series of tasks for participants to complete. Since our tests were navigation-focused, we designed tasks to get participants using the menu structure to move around our prototypes as much as possible. We also took care to make the tasks as authentic and true-to-life as possible, and to order them logically, to maintain participants' confidence in each prototype.
Mapping expected routes in Maze
Before a test can go live in Maze, 'direct paths' must be set for each task. This is done by using the prototype to complete the tasks within Maze and assigning the route taken as the 'direct path'. These paths represent a benchmark: the pathway you expect users to take through the prototype to reach the required destinations. Setting up these paths in Maze enables you to gather data about:
- Direct successes – proportion of participants arriving at the required destination by the direct route
- Indirect successes – proportion finding the required destination but by following a route different to the direct one
- Bounces/give ups – proportion not reaching the required destination
For each task we ran, setting ‘direct paths’ enabled us to make a judgement on how effectively people could use the designed navigation structure.
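To make the three outcome categories concrete, here is a minimal sketch of how recorded participant routes could be classified against an expected 'direct path'. This is an illustrative model of the metrics Maze reports, not Maze's actual implementation; the screen names and paths are invented for the example.

```python
# Hypothetical sketch: classify each participant's recorded route against
# a benchmark 'direct path' and compute the proportions in each category.

DIRECT_PATH = ["home", "study", "postgraduate", "fees"]  # invented example route
DESTINATION = DIRECT_PATH[-1]

def classify(recorded_path):
    """Return 'direct', 'indirect' or 'give-up' for one participant's route."""
    if recorded_path == DIRECT_PATH:
        return "direct"    # exact match with the benchmark route
    if recorded_path and recorded_path[-1] == DESTINATION:
        return "indirect"  # reached the destination by a different route
    return "give-up"       # never reached the destination

def success_rates(paths):
    """Proportion of participants falling into each outcome category."""
    counts = {"direct": 0, "indirect": 0, "give-up": 0}
    for path in paths:
        counts[classify(path)] += 1
    return {outcome: n / len(paths) for outcome, n in counts.items()}

# Four invented participant routes: two direct, one indirect, one give-up.
recorded = [
    ["home", "study", "postgraduate", "fees"],
    ["home", "research", "home", "study", "postgraduate", "fees"],
    ["home", "about", "contact"],
    ["home", "study", "postgraduate", "fees"],
]
print(success_rates(recorded))
# {'direct': 0.5, 'indirect': 0.25, 'give-up': 0.25}
```

A high give-up rate on a task, or a large share of indirect successes, would suggest the menu structure or labelling for that destination needs rework.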
Recruiting people as testers
It is difficult to be certain that a given navigational structure is widely usable and understandable; testing with as many people as possible increases confidence in the viability of the structure as a concept. Part of the appeal of Maze as a testing platform was its built-in capacity to quickly access a panel of participant testers. For a fee of approximately $300, it was possible to recruit 100 participants (with the option of narrowing by limited parameters, like age range) to complete the test in approximately 24 hours.
What we tested and what we found
We used Figma to build several prototypes of navigation structures, and Maze to test these. Details of what we built, what we tested and what we found are documented in the following blogs: