Seeing through usability improvements
I ran our usability testing training course again yesterday, and another 20 members of staff left having seen their website or application in a totally new light. In this post, I want to consider what happens after usability testing: you’ve identified some problems, so how do you see your fixes through?
This might seem like a no-brainer. We have some problems that have been shown to trip up users over and over again in usability testing sessions. And yet there is resistance to change. The problem remains. Sounds crazy but it happens more often than you’d think.
What the experts say
Two usability experts – Steve Krug (author of “Don’t Make Me Think”) and Caroline Jarrett (author of “Forms That Work”) – surveyed the UX community a couple of years ago to find out why people who had encountered this problem thought it happened, and Caroline ran an excellent follow-up session proposing some tactics.
A really great piece of feedback I received about my usability testing training course was on this issue. The session has been running since about 2007 and always has great feedback, but a few people had subsequently expressed their frustrations. For all their enthusiasm and hard work, they weren’t able to influence colleagues to make their website better even though the problems were there for all to see.
I worked on this last year, gathering the views of a few UX professionals I know working in other companies around Edinburgh, and developed a new segment towards the end of the session looking at precisely this problem. One of my favourite resources, which I include in this segment, is some work by David Travis on prioritising usability problems.
My top tips
Finally, my own tips, based on my experiences here at Edinburgh and at other large public sector organisations over the past 14 years.
I’ve actually lifted this from a contribution I made to a usability discussion group a while back. The original question was:
Sometimes we have an eager group of people who want to test their website, but there is an executive or manager who, in the end, won’t agree to any changes no matter how reasonable or fact-based they are. They might disagree publicly during the meeting, or, worse, smile during the meeting and then deep-six all the recommendations afterwards. I’d love any ideas about how to anticipate and overcome this problem – it’s frustrating for everyone involved and wastes a lot of time.
I’ve had this kind of experience too. I try to do as many of the following as I can these days, and I mostly manage to avoid it, but the kind of person you describe still crops up from time to time. I find these scenarios arise mainly when you haven’t got to know the important stakeholders well enough before you begin…
- Get key stakeholders to help you set the tasks. Talk to the key people about how they feel about their website or app or whatever. What do they think is great? What concerns them? What would they like to see happen or be developed in the future? You’ll have your own ideas about what needs to be addressed before you begin testing, but if you’re not sensitive to their positions you’ll probably back them into a corner when you share your findings, and they won’t take kindly to what you say.
- If recruiting the target audience is a problem, work with stakeholders to develop a persona or two, then ask the participants to play the role of the persona. Steve Krug says, “Recruit loosely and grade on a curve”, but sometimes this isn’t enough for a cynical, change-averse boss. It might not be perfect, but what you can do is agree who Bob the persona is, what the company wants people like Bob to do, and what they expect Bob wants to do. You’ll only have people pretending to be Bob in the testing sessions, but if five people pretending to be Bob, independently of each other, say and do the same kinds of things, that’s a slightly stronger case than results from tests with any old participant. Don’t give the boss the chance to think, “That’s fine, but our customers aren’t that stupid…”
- Video the whole experience. (Or better yet, get stakeholders to observe the sessions.) Reports and stats are fine, but what really gives immediacy to what you’ve found is a 30 second clip of someone having big problems or vocalising what is frustrating them. I’ve found that these carry more weight than the amalgamated findings of several participants.
- Back up the anecdotal findings with stats from your website analytics. We saw 4 out of 5 participants do this, and our analytics indicate that x thousand visitors are doing something similar every month. Analytics gives you the what and user testing gives you the why; together they’re greater than the sum of their parts. I saw Lou Rosenfeld present on this, where he likened it to blind men feeling their way round an elephant.
- Put a pound sign on it. Again, analytics helps here. Even if you have to make some fairly wild stabs in the dark as you do your sums, so long as you explain each step and the assumptions you’ve made, it should be fine. In my experience, when you make estimates like this, the conversation tends to go: “You’re estimating the number of… why?” “Because we don’t know, so this is our best guess based on…” “We don’t know x?! We need to know this!” In the end it’s the potential loss or gain that really catches the eye. No matter what some people say, they don’t give a toss about the user experience. The challenge is to tie it to the bottom line.
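To show the kind of back-of-an-envelope sum I mean, here’s a minimal sketch. Every figure in it is a hypothetical assumption for illustration, not a real number from any test; the point is that each step and assumption is spelled out so stakeholders can challenge it.

```python
# Rough estimate of what a usability problem might be costing.
# All figures below are hypothetical assumptions, clearly labelled.

monthly_visitors = 10_000   # assumption: analytics count of visitors reaching the page
failure_rate = 4 / 5        # assumption: 4 of 5 test participants hit the problem
abandon_share = 0.10        # assumption: fraction of affected visitors who give up
task_value = 25.00          # assumption: average value (pounds) of a completed task

affected_per_month = monthly_visitors * failure_rate
lost_per_month = affected_per_month * abandon_share * task_value
lost_per_year = lost_per_month * 12

print(f"Visitors likely affected per month: {affected_per_month:.0f}")
print(f"Estimated loss per month: £{lost_per_month:,.2f}")
print(f"Estimated loss per year:  £{lost_per_year:,.2f}")
```

Even if the boss disputes every number, that’s the useful argument to be having: now you’re debating which estimate to refine, not whether the problem matters.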
Get help with usability testing
We don’t run this course as often as I’d like, mainly due to trainer availability. But if we get enough enquiries we try to run more sessions. And I’m always happy to come out to a school or unit to run it, if you can gather 10 or more colleagues together.
And of course, we can help with planning sessions at the Website Support Clinic, which runs every Tuesday afternoon.
What about you?
Have you ever encountered these kinds of problems? How did you get over them?
Leave a comment and share your experiences. I’d love to hear from you!